Dataset fields (type and observed range):

paperhash: stringlengths (40 to 40)
s2_corpus_id: stringlengths (3 to 9)
arxiv_id: stringclasses (0 values)
title: stringlengths (7 to 324)
abstract: stringlengths (0 to 7.23k)
authors: sequence
summary: stringclasses (0 values)
field_of_study: sequencelengths
venue: stringlengths (15 to 253)
publication_date: stringdate (1952-06-01 00:00:00 to 2019-07-01 00:00:00)
n_references: int32 (0 to 4.92k)
n_citations: int32 (0 to 84.2k)
n_influential_citations: int32
introduction: stringlengths (15 to 173k)
background: stringlengths (2 to 115k)
methodology: stringlengths (40 to 140k)
experiments_results: stringlengths (1 to 142k)
conclusion: stringlengths (7 to 38k)
full_text: stringlengths (29 to 195k)
decision: bool (0 classes)
decision_text: stringclasses (0 values)
reviews: sequence
comments: sequence
references: sequence
hypothesis: stringlengths (105 to 1.27k)
month_since_publication: int32 (67 to 872)
avg_citations_per_month: float32 (0 to 1.24k)
mean_score: float32
mean_confidence: float32
mean_novelty: float32
mean_correctness: float32
mean_clarity: float32
mean_impact: float32
mean_reproducibility: float32
openreview_submission_id: stringclasses (0 values)
e3fb7046065ba419783d53604c3713c3b732e198
220811
null
Using Natural Language Descriptions to Improve the Usability of Databases
This paper describes the REGIS extended command language, a relational data language that allows users to name and describe database objects using natural language phrases. REGIS accepts multiple-word phrases as the names of tables and columns (unlike most systems, which restrict these names to a few characters). An extended command parser uses a network-structured dictionary to recognize multi-word names, even if some of the words are missing or out of order, and to prompt the user if an ambiguous name is entered. REGIS also provides facilities for attaching descriptive text to database objects, which can be displayed online or included in printed reports. Initial data from a few databases indicate that users choose to take advantage of the naturalness of multi-word descriptions when this option is available.
{ "name": [ "Hafner, Carole D. and", "Joyce, John D." ], "affiliation": [ null, null ] }
null
null
First Conference on Applied Natural Language Processing
1983-02-01
7
2
null
The REGIS extended command language is a relational data language that allows users to name and describe database objects using natural language phrases. REGIS [4] is an interactive data management system that has been in use at General Motors since 1975. The system is designed to be easy for non-programmers to understand, and it has given many people their first hands-on experience with computers. A REGIS database consists of a hierarchical structure of named objects: one or more files, each containing zero or more tables, each composed of zero or more columns of data. REGIS users can create, query, or modify database objects interactively, using simple keyword-based relational commands.

The usability of database query languages has been recognized as an important problem (Codd [1], Greenblatt and Waxman [2], Welty and Stemple [5]); however, a closely related issue that has not been addressed is the usability of the data itself. In order to interact with a database effectively, users must be able to understand and refer to the objects in the database. Current database systems restrict the names of database objects to a few characters, which can lead to cryptic abbreviations that are difficult to understand and remember. Documentation facilities (if they exist at all) are not designed to be accessed interactively. The need to refer to external sources for descriptive information, and the need to remember cryptic abbreviations, are obstacles to usability that are especially disruptive to the new or occasional user of a database.

To provide a more supportive environment for data management, a new command interface has been added to REGIS, which accepts multiple-word phrases as the names of tables and columns, and which also provides on-line documentation capabilities. Multiple-word names can be up to 40 characters long, instead of the previous REGIS limit of 8 characters. "Comment" data consisting of descriptive text can be attached to files, tables, or columns. Users can display the comments for parts of the database: e.g., for all the tables in a file, for a particular table, for a table and all of its columns, or for a particular column. Table names, column names, and comments can be created, queried, and changed interactively.

A straightforward implementation of multi-word names for database objects would not be practical, since it would significantly increase the amount of typing required during command input. Commands would become much longer, leading to slow and tedious interaction, and increasing the number of typing errors. To solve this problem, a flexible recognition procedure is used in REGIS, which recognizes multi-word names even if some of the words are missing or out of order. Users are able to refer to database objects by specifying any part of the name: for example, if the name of an object is "RESULTS OF FIRST TEST", the user can enter "FIRST TEST", "TEST RESULTS", "FIRST RESULTS", or just "RESULTS", and the object will be located. If an ambiguous name is entered, the user is prompted with a list of choices and asked to select one. Figure 1 shows part of a REGIS table, for an application that was converted from the original version of REGIS to the extended command version. Each column in the table represents a question that was asked in a survey of consumer attitudes.
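As an illustration of the matching behaviour described above, a minimal Python sketch is shown here. This is not the REGIS implementation (which uses the network-structured application dictionary described below), and the table names are the paper's own examples plus hypothetical ones.

```python
def resolve_name(user_phrase, object_names):
    """Match a partly specified, possibly reordered phrase against full
    object names: a candidate matches when every word the user typed
    occurs somewhere in its name, regardless of order or missing words."""
    query = set(user_phrase.upper().split())
    matches = [name for name in object_names
               if query <= set(name.upper().split())]
    if len(matches) == 1:
        return matches[0]
    return matches            # empty: not found; more than one: prompt the user

tables = ["RESULTS OF FIRST TEST", "RESULTS OF SECOND TEST", "TEST SCHEDULE"]
print(resolve_name("FIRST RESULTS", tables))   # 'RESULTS OF FIRST TEST'
print(resolve_name("TEST RESULTS", tables))    # two candidates -> ask the user
```

The second call is deliberately ambiguous: as in REGIS, an ambiguous phrase yields a list of candidates from which the user would be asked to choose.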
The table illustrates both the difficulty of finding descriptive abbreviations for data in some applications, and the importance of the flexible recognition procedure to the success of the system (users would be unlikely to use long, descriptive names if they were not able to refer to them more briefly when typing commands). Flexible recognition of names provides a user-friendly environment for data management, where a user is not required to know the exact names of database objects. If a REGIS user enters the command "LIST SURVEY" and there are several surveys in the database, the system will display a list of the matching tables and ask the user to select one. If the list does not provide enough information to select the correct one, the user can cancel the command and examine the database further by displaying "Comment" data. (See Section IV for a discussion of the comment feature.)

The implementation of flexible name recognition in REGIS has required significant extension of both the relational database schema and the command parser. The schema has been extended to include a network-structured application dictionary, containing all of the words that occur in the user's table and column names. Each word has "TABLE" links connecting it to the tables it describes, and "COLUMN" links connecting it to the columns it describes. A name recognition algorithm (described in Hafner [3]) traverses these links to determine what object the user is referring to. When an ambiguous reference is entered, the algorithm returns a list of potential choices to be displayed.

There are two areas in which the REGIS command parser uses computational linguistic techniques to help it behave more intelligently: in segmenting command strings into distinct parameters, and in restricting the choices for an ambiguous reference. Both of these capabilities depend on the use of a command language grammar, which tells the parser what kind of object it is looking for at each point in the parsing process: a table name, a column name, a command name, a keyword parameter from a fixed set, or a numeric parameter. The command language grammar is also used to generate more explicit error feedback than was possible in the previous version of REGIS.

Knowledge of both the command language syntax and the extended database schema is required to determine how the input should be segmented. In ordinary database query languages, segmenting a command string into parameters is not a problem; each word or "token" represents one object. Using multi-word names, however, the system cannot use blanks as delimiters. (Requiring other delimiters, such as commas or semi-colons, was rejected as being too inconvenient for users.) When the command parser is looking for a table or column name, it invokes the name recognition algorithm; when the parser is looking for a REGIS keyword or other value, it reverts to the token processing mode.

In selecting choices for an ambiguous reference, REGIS uses knowledge about both the syntax and the semantics of the command language: in many REGIS commands, a table name appears in one place in the command, and column names from that table appear in other positions. When this occurs, the command parser knows that the column names should only be compared with other columns in the given table; it will not find ambiguities with columns from other tables.

CREATING AND DISPLAYING COMMENT DATA

The comment feature of REGIS allows descriptive text to be incorporated into a database and displayed on request.
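A toy version of such a word-to-object index is sketched below. This is illustrative Python, not the algorithm of Hafner [3]; the object names are hypothetical, and the optional restriction argument mirrors the idea of comparing column names only against the columns of a known table.

```python
from collections import defaultdict

class ApplicationDictionary:
    """Each word is linked to the tables and columns whose names contain it;
    a lookup intersects the link sets of the words in the user's phrase."""

    def __init__(self):
        self.links = defaultdict(lambda: {"TABLE": set(), "COLUMN": set()})

    def add_object(self, kind, name):
        # Link every word of the object's name back to the object.
        for word in name.upper().split():
            self.links[word][kind].add(name)

    def lookup(self, phrase, kind, within=None):
        sets = [self.links[w][kind] for w in phrase.upper().split()
                if w in self.links]
        hits = set.intersection(*sets) if sets else set()
        if within is not None:            # e.g. restrict to columns of one table
            hits &= within
        return sorted(hits)               # one hit, or a list to prompt with

d = ApplicationDictionary()
d.add_object("TABLE", "RESULTS OF FIRST TEST")
d.add_object("COLUMN", "AGE OF RESPONDENT")
print(d.lookup("FIRST TEST", "TABLE"))    # ['RESULTS OF FIRST TEST']
```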
Comments are created and attached to a database object by entering the command that is normally used to create the object, followed by the keyword COMMENT, followed by an unrestricted amount of text. The commands shown below would cause the text following the keyword COMMENT to be attached to a file, a table, and a column, respectively: DEFINE FILE1 COMMENT .... Figure 2 shows the comment for one column of the survey database described in Section II. The comment tells exactly what question was asked of the respondents, and shows how their answers were encoded in the database.

Both the original version of REGIS and the extended command version are in production use at General Motors. Initial data from a few databases indicate that users choose to take advantage of the naturalness of multi-word descriptions when this option is available. In a sample of applications running on the original version of REGIS, we found that only 35% of the column names were English words, as compared with 93% for the extended version. The average number of words per column name in the extended version was 2.4. (This result may be biased in favor of English words, since the users of the new version were aware that they were part of an experiment.) Informal contact with users indicates that the ability to incorporate descriptive comments into a database is a useful feature which contributes to the overall task of information management. Several users of the original version of REGIS have decided to change over to the new version in order to take advantage of the on-line documentation capability. We expected that the potential for ambiguous references would cause some difficulties (and perhaps objections) on the part of users; however, these difficulties have not occurred. Referring to a database object by a subset of the words in its name is a concept that users understand and are able to manipulate (sometimes rather ingeniously) to create applications that are responsive to their needs.

The REGIS extended command language incorporates natural language descriptions into a user's database in a flexible and easy-to-use manner. The recognition of partly-specified names and the ability to recover from ambiguity are features that are not found in other data management systems. REGIS does not have the power of a natural language understanding system; syntactic variants of object names will only be recognized if they contain the same words as the original name, and syntactic variants of commands are not supported at all. However, on the positive side, REGIS does not require a linguist or database administrator to explicitly create an application dictionary; the dictionary is created automatically by the system, and is updated dynamically when users add, delete, or rename objects.

The REGIS extended command language required approximately two work-years of effort to develop, much of it devoted to integrating the extended capabilities into the REGIS production environment. The project's goal, to deliver a limited capability for English language description directly into the hands of users, has been accomplished. Future studies of the use of this facility in the production environment will provide feedback on the linguistic habits and priorities of database users.
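The attach-and-display behaviour of the comment feature can be mimicked with a small sketch. This is illustrative Python, not the REGIS command syntax (of which only one DEFINE example survives above); the object names and comment texts are hypothetical.

```python
comments = {}   # maps an object path, e.g. ('FILE1', 'SURVEY1', 'Q07'), to its text

def attach_comment(path, text):
    comments[path] = text

def show_comments(prefix):
    """Display the comment for an object and for everything beneath it,
    mirroring 'the comments for a table and all of its columns'."""
    for path in sorted(comments):
        if path[:len(prefix)] == prefix:
            print(".".join(path), "-", comments[path])

attach_comment(("FILE1",), "Consumer attitude survey data")
attach_comment(("FILE1", "SURVEY1", "Q07"),
               "Exact wording of question 7 and the coding of the answers")
show_comments(("FILE1",))        # prints the file comment and the column comment
```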
null
null
null
null
null
null
null
null
{ "paperhash": [ "welty|human_factors_comparison_of_a_procedural_and_a_nonprocedural_query_language", "oliver|performance_monitor_for_a_relational_information_system", "joyce|regis:_a_relational_information_system_with_graphics_and_statistics" ], "title": [ "Human factors comparison of a procedural and a nonprocedural query language", "Performance monitor for a relational information system", "REGIS: a relational information system with graphics and statistics" ], "abstract": [ "Two experiments testing the ability of subjects to write queries in two different query languages were run. The two languages, SQL and TABLET, differ primarily in their procedurality; both languages use the relational data model, and their Halstead levels are similar. Constructs in the languages which do not affect their procedurality are identical. The two languages were learned by the experimental subjects almost exclusively from manuals presenting the same examples and problems ordered identically for both languages. The results of the experiments show that subjects using the more procedural language wrote difficult queries better than subjects using the less procedural language. The results of the experiments are also used to compare corresponding constructs in the two languages and to recommend improvements for these constructs.", "Although some relational information systems have recently become available for production use, very few, if any of them, contain facilities to collect performance data. This paper describes a method for implementing a performance monitor and some of the data collected by this performance monitor which was recently installed in the REGIS (RElational General Information System). REGIS is currently being used within General Motors. The performance monitor is used to collect data about the usefulness of the command language, performance improvements following major system upgrades and performance predictions based on past runs. While installing the performance monitor several system deficiencies were uncovered. Correction of the deficiencies has already improved performance by almost an order of magnitude. Future improvements are expected to improve performance by at least another order of magnitude.", "While the relational data management model has been known for some time, it has yet to be proven that such systems can perform efficiently in an industrial environment. This paper describes user experience with and the external highlights of the RElational General Information System (REGIS) which is currently being used within General Motors. This data analysis system combines the features of relational information handling along with graphical, interactive and statistical capabilities. REGIS provides the flexibility of handling unforeseen queries and enables the user to interactively analyze his data by entering commands from terminals. Its use does not require any conventional programming effort. It is possible to interface to user written functions if the need arises." ], "authors": [ { "name": [ "Charles Welty", "D. Stemple" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "N. N. Oliver", "J. Joyce" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "J. Joyce", "N. N. 
Oliver" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null ], "s2_corpus_id": [ "6107239", "2708652", "16470135" ], "intents": [ [], [], [ "background" ] ], "isInfluential": [ false, false, false ] }
null
504
0.003968
null
null
null
null
null
null
null
null
4b429dd92e71f60a3aa13c8234fe918f9dd7146a
13057309
null
Automatic Analysis of Descriptive Texts
This paper describes a system that attempts to interpret descriptive texts without the use of complex grammars. The purpose of the system is to transform the descriptions to a standard form which may be used as the basis of a database system knowledgeable in the subject matter of the text.
{ "name": [ "Cowie, James R." ], "affiliation": [ null ] }
null
null
First Conference on Applied Natural Language Processing
1983-02-01
11
29
null
The texts currently used are wild plant descriptions taken directly from a popular book on the subject. Properties such as size, shape and colour are abstracted from the descriptions and related to parts of the plant in which we are interested. The resulting output is a standardized hierarchical structure holding only significant features of the description. The system, implemented in the PROLOG programming language, uses keywords to identify the way segments of the text relate to the object described. Information on words is held in a keyword list of nouns relating to parts of the object described. A dictionary contains the attributes of ordinary words used by the system to analyse the text. The text is divided into segments using information provided by conjunctions and punctuation. About half the texts processed are correctly analysed at present. Proposals are made for future work to improve this figure. There seems to be no inherent reason why the technique cannot be generalised so that any text of semi-standard descriptions can be automatically converted to a canonical form.

A lot of useful information, covering many subject areas, is presently available in printed form in catalogues, directories and guides. Good examples are plants in "Collins Pocket Guide to Wild Flowers", aeroplanes in "Jane's All the World's Aircraft" and people in "Who's Who". Because this information is represented in a stylised form, it is amenable to machine processing to abstract salient details concerning the entity being described. The research described here is part of a long term project to develop a system which can "read" descriptive text and so become an expert on the material which has been read. The first stage of this research is to establish that it is indeed possible to abstract useful information from descriptive text, and we have chosen as a typical example a text consisting of descriptions of wild plants. Our system reads this text and generates a formal canonical plant description. Ultimately this will be input to a knowledge-based system which will then be able to answer questions on wild plants.

The paper gives a limited overview of the recent work in text analysis in order to establish a context for the approach we adopt. An outline of the operation of the system is then made. The analysis of our text proceeds in four separate stages, and these are considered in conjunction with a sample text. The first stage attaches to each word in the text attributes which are held in either a keyword list or the system dictionary. This expanded text is then split up using conjunctions, punctuation marks and the keywords in the text to assign each segment of the text to a particular part of the plant. The third stage gathers up the descriptions for a particular part and abstracts properties from them. The final operation formats the output as required. We then look at the more detailed operation of the system in terms of specific parts of interest. This covers the dictionary, skeleton structures, text splitting, text analysis and the limited word guessing attempted by the system. Future developments are then considered, in particular the possibility of generalising the system to handle other topics. The actual implementation of the system and the use of PROLOG are examined, and we conclude with some notes on the current utility of our system.

Many research workers are interested in different aspects of text analysis.
The work done by Schank (197~) and that of Sager (1981) are two contrasting examples of this interest. In addition to the research-oriented work, some commercial groups are interested in the practicability of generating database input from text. Although the internal details of the various systems are totally different, the final result is some form of layout, script or structure which has been filled out with details from the text. The approach of the various groups can be contrasted according to how much of the text is preserved at this point and how much additional detail has been added by the system. DeJong (1979) processes newswire stories, and once the key elements have been found the rest of the text is abandoned. Sager makes the whole text fit into the layout, as here small details may be of vital importance to the end user of the processed text. Schank in his story understanding programs may actually end up with more information than the original text, supplied from the system's own world knowledge.

The other contrasting factor is the degree of limitation of the domain of interest of the text processors. The more a system has been designed with a practical end in view, the more limited the domain. Schank is operating at the level of general language understanding. DeJong is limiting this to the task of news recognition and abstraction, but only certain stories are handled by the system. Sager has reduced the range still further to a particular type of medical diagnosis.

Very recent work appears to be approaching text understanding from a word-oriented viewpoint: each word has associated with it processes which drive the analysis of the text (Small, 198[). We have also been encouraged in our own approach by Kelly and Stone's (1979) work on word disambiguation, the implication of which seems to be that word-driven rules can resolve ambiguities of meaning in a local context.

Our own case is a purely practical attempt to generate large amounts of database-building information from single-topic texts. It should not be assumed, however, that a truly comprehensive syntax for a descriptive text would be simpler than for other types. The reverse may be true, and the author of the descriptions may attempt to liven up his work with asides, unusual word orders and additional atmospheric details. Our system does not use sophisticated grammatical techniques. It is our contention that in the domain of descriptive texts we can make certain assumptions about the way the descriptive data is handled. These allow very crude parsing to be sufficient in most cases. Similarly, the semantic structures involved are simple. A description of an object consisting of several parts usually mentions the part and its properties in a single piece of text. The basic properties we are looking for - shape, colour, size - are all described by words with a direct physical relation or with a simple mental association. What we are really trying to do is tidy the description into a set of suitable noun phrases.

III OUTLINE OF THE SYSTEM

The text analysis system has been constructed on the assumption that much of the information held in descriptive texts can be extracted using very simple rules.
These rules are analogous to the "sketchy syntax" suggested by Kelly and Stone and operate on the text on a local rather than a global basis. At the time of writing our system processes plant descriptions, in search of ten properties which we consider distinctive. Examples of these properties are the size of the plant, the colour of its flowers and the shape of its flowers. New properties can be added simply by extending the skeleton plant description. The descriptions themselves are taken from the Collins guide (McClintock and Fitter, 1974). The system has been built to handle this topic and it attempts to fill out various properties for selected parts of a plant. A skeleton description is used to drive the processing of the text. This indicates the parts of the plant of interest and the properties required for each part. The structure which we presently use is shown in Example 1, after it has been filled out by processing the accompanying description. It should be noted that if the system cannot find a property, then the null property "noinfo" is returned.

An outline of how a description is processed by the system and converted to canonical form is given in Figure 1. There are four distinct stages in the transformation of the text. After the splitting stage the text consists of segments, each with an attached keyword; this keyword identifies the text as describing a particular part of the plant. Text segments are gathered together for a particular keyword. This may pull together text from separate parts of the original description. This new unit of text is then examined to see if any of the words or phrases in it satisfy the specific property rules required for this part of the plant. If found, the phrases are inserted into appropriate parts of the structure. The ultimate output of the system is intended as input to a relational database system developed at the University of Strathclyde. At the moment the structure is displayed in a form that allows checking of the system performance.

A. Dictionary processor

The raw text is read in and each word in the text is checked in a dictionary/keyword list. Each dictionary entry has an associated list of attributes describing both syntactic and semantic attributes of that word. These attributes are looked at in more detail in Section IV. If a word in the text appears in the dictionary, it is supplemented with an attribute list abstracted from the dictionary. The keywords for a text depend on which parts of the object we are interested in. Thus for a plant we need to include all possible variants of flower (floret, bud) and of leaf (leaflet) and so on. Fortunately this is not a large number of words, and they can be easily acquired from a thesaurus. The output from this stage is a list of words, and attached to each word is a list of the attributes of this word.
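The four stages just outlined can be sketched end to end in a few lines. This is a minimal illustration in Python rather than the authors' Prolog, using an invented mini-dictionary, keyword list and pivot set; the real system's data and rules are much richer.

```python
# Hypothetical mini-dictionary, keyword list and pivot set.
DICTIONARY = {"small": ["form-size"], "hairy": ["touch-roughness"],
              "yellow": ["vision-colour"]}
KEYWORDS = {"flower": "flower", "flowers": "flower",
            "leaf": "leaf", "leaves": "leaf"}
PIVOTS = {",", ".", ";", "with", "and"}

def analyse(text):
    """Toy version of the four stages: tag each word with its dictionary
    attributes, split at pivotal points, assign each segment to the plant
    part it names (or 'general'), and collect the attribute-bearing words."""
    words = text.lower().replace(",", " , ").replace(".", " . ").split()
    tagged = [(w, DICTIONARY.get(w, [])) for w in words]        # stage 1
    segments, current = [], []
    for w, attrs in tagged:                                     # stage 2
        if w in PIVOTS:
            if current:
                segments.append(current)
            current = []
        else:
            current.append((w, attrs))
    if current:
        segments.append(current)
    result = {}
    for seg in segments:                                        # stages 3 and 4
        part = next((KEYWORDS[w] for w, _ in seg if w in KEYWORDS), "general")
        found = [w for w, attrs in seg if attrs]
        result.setdefault(part, []).extend(found or ["noinfo"])
    return result

print(analyse("A small hairy plant, with yellow flowers."))
# {'general': ['small', 'hairy'], 'flower': ['yellow']}
```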
null
null
The expanded text is then burst into segments associated with each keyword. We identify segments by using "pivotal points" in the text. Pivotal points are pronouns, conjunctions, prepositions and punctuation marks. This is the simplifying assumption which we make which allows us to avoid detailed grammars. The actual words and punctuation marks chosen to split the text are critical to the success of this method. It may be necessary to change these for texts by a different author, as each author's usage of punctuation is fairly idiosyncratic. Within a given work, however, fairly consistent results are obtained. The actual splitting of the text is covered more fully in Section IV C. We now have many small segments of text.

IV SYSTEM DETAILS

The dictionary is the source of the meanings of words used during the search for properties. Two other word sources are incorporated in the system: a list of keywords which is specific to the subject being described, and a list of words which may be used to split the text. This second list could probably be incorporated in the dictionary, but we have avoided this until the system has been generalised to handle other types of text.

The dictionary entry for each word consists of three lists of attributes. The first contains its part of speech, a flag indicating the word carries no semantic information, and some additional attributes to control processing. For example, the attribute "take-next" indicates that if a property rule is already satisfied when this word is reached in the text, then the next word should be attached to the property phrase already found. Thus the word "-" carries this property and pulls in a successive word. The second list contains attributes whose meaning would appear to be expressible as a physical measure of some kind: "touch-roughness", "vision-intensity". Many of the words used in descriptions can be adequately categorised by a single attribute of this type. Thus the word red is an "adjective" with a physical property "vision-colour". The third contains those which require physical measures to be mapped and compared to internal representations, or which deal with the manipulation of internal representations alone: "form-shape", "context-location". Words using these attributes generally tend to be more complex and may have multiple attributes. Thus the word field has as attributes "context-location" and "relationship-multiple-example", whereas the word Scotland also carries "context-location" but is qualified by "relationship-single-example". We realize this division is delimited by an extremely fuzzy border, but when the search for a basis for word definition was made, this helped the intuitive allocation of attributes. Sixty-five different attributes have been allocated. Only sixteen of these are used in the rules for our current list of properties. The size of the dictionary has been considerably reduced by including the algorithm, given by Kelly and Stone (1979), for suffix removal in the lookup process.

The structure we wish to fill out is mapped directly to a hierarchical PROLOG structure, with the uninstantiated variables, shown in the structure in capital letters, indicating where pieces of text are required. The PROLOG system fills in these variables at run time with the appropriate words from the text. Each variable in a completed structure should hold a list of words which describe that particular property. Thus a partial plant structure is defined along the lines sketched below; this skeleton is accompanied by a set of keyword lists.
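A rough analogue of such a skeleton is given here as an illustrative Python sketch, not the authors' Prolog term; the parts and property slots shown are examples, not the system's full set of ten properties.

```python
# Illustrative stand-in for the skeleton plant description: each part of the
# plant carries the property slots we want filled; a slot left unfilled at the
# end of processing is reported as the null property "noinfo".
def make_skeleton():
    return {
        "general": {"size": None},
        "flower":  {"colour": None, "shape": None},
        "leaf":    {"colour": None, "shape": None},
    }

def report(skeleton):
    """Replace unfilled slots with 'noinfo'."""
    return {part: {prop: (val if val is not None else "noinfo")
                   for prop, val in props.items()}
            for part, props in skeleton.items()}

plant = make_skeleton()
plant["flower"]["colour"] = ["yellow"]    # filled in by a property search
print(report(plant))
```

In the Prolog version these slots are uninstantiated variables, which the property search instantiates with lists of words taken from the text.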
Each list is associated with one of the first levels of the structure. Thus a partial list for "flower" might be: keyword(flower,1). keyword(bud,1). keyword(petal,1). keyword(floret,1). The number indicates which item on the first level of the structure is associated with these keywords. We assume initially that we are describing the general details of the plant, so the text read up to the first pivotal point belongs to that part of our structure, keyword level 0. Each subsequent piece of text found is assigned to the same keyword until a piece of text is found containing a new keyword. This becomes the current keyword, and following pieces of text belong to this keyword until yet another keyword is found.

We now gather together the pieces of text for a part of the structure and look for properties as defined in the skeleton structure. A property search is carried out for each of the property names found at level two of the structure. The property rules have the general form of a repeat loop over the words of the gathered text, keeping any words that satisfy the rule and exiting when the rule completes: if "property" is YES, the words kept are returned; if "property" is NO, the value "noinfo" is returned.

The fundamental assumption we make for descriptions of objects is that the part described will be mentioned within the piece of text referring to it. Thus conjunctions and punctuation marks are taken to flag pivotal points in the text where attention shifts from one part to another.

E. Special Purpose Rules

We are trying to avoid rules specifically associated with layout which would need redefinition for different texts. However, the system does assume a certain ordering in the initial title of the descriptions. Thus the name of the plant is any adjectives followed by a word or words not in the dictionary. It is intended to add rules to detect the Latin specific name of the plant. We have excluded these from our current texts. These will in all probability be based on a similar rule of ignorance, reinforced by some knowledge of permissible suffixes.

Certain words are identified in the dictionary by the attributes "take-next" and "take-previous". They imply that if a property rule is satisfied at the time that word is processed, then the successor or predecessor of that word, and the word itself, should be included in the property. The principal use of this occurs in hyphenated words. These are treated as three words: word1, hyphen, word2. The hyphen carries both "take-next" and "take-previous" attributes. This often allows attachment of unknown words in a property phrase. Thus "chocolate-brown" would be recognised as a colour phrase despite the fact that the word chocolate is not included in the dictionary. Words which actually name the property being sought carry a "take-previous" attribute. Thus "coloured", when found, will pull in the previous word, e.g. "butter colour", although the word butter may be unknown or have no specific dictionary attribute recognised by the rule.
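The take-next and take-previous behaviour can be illustrated with a small Python sketch. It is not the authors' Prolog rules; the word sets are invented stand-ins for dictionary attributes, and the repeated-scan rule form described above is collapsed into a single pass.

```python
COLOUR_WORDS = {"brown", "red", "yellow"}     # stand-ins for "vision-colour" entries
NAMING_WORDS = {"coloured", "colour"}         # words that name the property itself

def find_colour(words):
    """Return a colour phrase from a segment, or 'noinfo'.

    A hyphen (treated as its own token, as in the paper) lets an unknown word
    such as 'chocolate' ride along with a known colour word, and a word that
    names the property ('coloured') pulls in the word before it."""
    kept = []
    for i, w in enumerate(words):
        if w in COLOUR_WORDS:
            if i >= 2 and words[i - 1] == "-":            # hyphen: take-previous
                kept.extend([words[i - 2], "-", w])
            else:
                kept.append(w)
        elif w in NAMING_WORDS and i > 0 and not kept:    # 'coloured': take-previous
            kept.extend([words[i - 1], w])
    return " ".join(kept) if kept else "noinfo"

print(find_colour("petals chocolate - brown at the base".split()))  # chocolate - brown
print(find_colour("flowers butter coloured".split()))               # butter coloured
print(find_colour("leaves oval and toothed".split()))               # noinfo
```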
The code for the system is written in PROLOG (Clocksin and Mellish, 1981) as implemented on the Edinburgh Multi Access System (Byrd, 1981). This is a standard implementation of the language, with the single enhancement of a second internal database which is accessed using a hashing algorithm rather than a linear search. This has been used to improve the efficiency of the dictionary search procedures. PROLOG was chosen as an implementation language mainly because of the ease of manipulation of structures, lists and rules. The skeleton plant and keyword lists are held as facts in the PROLOG database. The implementation of the suffix stripping algorithm is a good example of the ease of expressing algorithms in PROLOG, the mapping from the original to our code being almost one to one. In addition, the implementation on EMAS allows large PROLOG programs to be run. The interpretive nature of the language also means that trace debugging facilities are available and new pieces of code can be easily incorporated into the system.

In the short term, the size of the dictionary and the rules built into the system must be increased so that a higher proportion of descriptions are correctly processed. Another problem which we must handle is the use of qualifiers referring to previous descriptions, e.g. "darker green" or "much less hairy than the last species". We intend to tackle this problem by merging the current canonical description with that of plants referred to previously. It would appear from work that has been carried out on dictionary analysis (Amsler, 1981) that a less intuitive method of word meaning categorization may be available. If it proves possible to map from a standard dictionary to our set of attributes, or some related set, then the rigour of our internal dictionary would be significantly improved and a major area of repetitive work might be removed from the system. It is also intended to extend the suffix algorithm to handle prefixes and to convert the part of speech attribute according to the transformations carried out on the word. This has not proved important to us up to the present, but future uses of the dictionary may depend on its being handled correctly.

In the longer term we intend to generalise the system to cope with other topic areas. In particular, we intend to provide a user interface to allow the system to be modified for a specific topic by user definitions and examples. The potential also exists for mapping from our word-based internal representation to a more abstract, machine-manipulable form. This may be the most interesting direction in which the work will lead.

Initial indications suggest that for about 50% of descriptions all ten properties are correctly evaluated, and for about 30%, 8 or 9 properties are correct. The remaining 20% are unacceptable, as fewer than 8 properties are correctly determined by the system. We anticipate that increasing the knowledge base of the system will significantly increase its accuracy. The very primitive "sketchy syntax" approach appears to offer practical solutions in analysing descriptive texts. Furthermore, there seems to be no intrinsic reason why a similar method could not be used to analyse temporal or causal structures. There will always be segments of text that the system cannot cope with, and to achieve a greater degree of accuracy we will need to allow the system to consult with the user in resolving difficult pieces of text.
null
Main paper: text splitting.: The expanded text ks then burst into segments associated with each keyword. We identify segments by using "pivotal points" in the text. Pivotal points are pronouns, conjuntlons, prepositions and punctuation marks. This is the simplifying assumption which we make which allows us to avoid detailed grammars. The actual words and punctuation marks chosen to split the text are critical to the success of this method. It may be necessary to change these for texts by a different author as each author's usage of punctuation is fairly Idiosynchratic. Within a given work however fairly consistent results are obtained. The actual splitting of the text is covered more fully An section IV C.We now have many small segments of text IV SYSTEM DETAILSThe dictionary is the source of the meanings of words used during the search for properties. Two other word sources are incorporated in the system, a llst of keywords which is specific to the subject being described and a list of words which may be used to split the text. This second list could probably be incorporated in the dictionary, but we have avoided this until the system has been generallsed to handle other types of text.The dictionary entry for each word consists of three lists of attributes.The first contains it's part of speech, a flag indicating the word carries no semantic information and some additional attributes to control processing. For example the attribute "take-next" indicates that if a property rule is already satisfied when this word is reached in the text then the next word should be attached to the property phrase already found. Thus the word "-" carries this property and pulls in a successive word.The second llst contains attributes whose meaning would appear to be expressible as a physical measure of some kind:-"touch-roughness", "vision-intenslty".Many of the words used in descriptions can be adequately categorised by a single attribute of this type. Thus the word red is an "adjective" with a physical property "vlslon-colour".The third contains those which require physical measures to be mapped and compared to internal representations or which deal with the manipulation of internal representations alone:-"form-shape", "context-location". Words using these attributes generally tend to be more complex and may have multiple attributes. Thus the word field has as attributes "context-location" and "relaclonshlp-multlple-example" whereas the word Scotland also carries "context-location" but is qualified by "relatlonship-single-example".We realize this cLtvis£on is delimited by an extremely fuzzy border, but when the search for a basis for word definition was made chls helped the intuitive allocation of attributes. Sixty five different attributes have been allocated. Only sixteen of these are used in the rules for our current list of properties.The size of the dictionary has been considerably reduced by including the algorithm, given by Kelly and Stone (1979) , for suffix removal in the lookup process.The structure we wish to fill ouC is mapped directly to a hierarchical PROLOG structure with the uninstantiated variables, shown in the structure in capital letters, indicating where pieces of text are required. The PROLOC system fills in these variables at run time with the appropriate words from the text.Each variable in a completed structure should hold a llst of words which describe that particular property. Thus a partial plant structure is defined as:- This skeleton is accompanied by a set of keyword lists. 
Each llst being associated with one of the first levels of the structure. Thus a partial I/st for •flower" ~/ght be:keyword(flower,l). keyword(bud,l). keyword(pecal,l). keyword(floret,l).The number indicates which item on the first level of the structure is associated with these keywords.We assume initially that we are describing the general details of the plant, so the text read up to the first pivotal poin~ belongs to that part of our structure, keyword level O. Each subsequent piece of text found assigns to the same keyword until a piece of text is found containing • new keyword.This becomes the current keyword and following pieces of cex~ belong to this kayword until yec another keyword is found.We now gather together the pieces of text for a part of the structure and look for properties as defined An the skeleton structure.A property search is carried out for each of the property names found at level two of the strutcure. The property rules have the general form:- then exit repeat } if('property" is YES) then return words kept if('property" is NO) then return "nolnfo'.• is YES)The fundamental assumption we make for descriptions of objects is that the part described will be mentioned within the piece of cexc referring to ic. Thus conjunctions and punctuation marks are taken to flag pivotal points lo the text where attention shifts from one part to 121 E. Special Purpose RulesWe are trying to avoid rules specifically associated with layout which would need redeflnltion for different texts. However the system does assume a certain ordering in the initial title of the descriptions. Thus the name of the plant is any adjectives followed by a word or words not in the dictionary.It is intended to add rules to detect the Latin specific name of the plant. We have excluded these from our current texts. These will in all probability be based on a similar rule of ignorance, reinforced by some knowledge of permissible suffices.Certain words are identified in the dlctlonary by the attributes "take-next" and "takeprevious". They imply that if a property rule is satisfied at the time that word is processed then the successor or predecessor of that word and the word itself should be included in the property. The principal use of this occurs in hyphenated words. These are treated as three words; wordl, hyphen, word2. The hyphen carries both "takenext" and "take-previous" attributes. This often allows attachment of unknown words in a property phrase. Thus "chocolate-brown" would be recognlsed as a colour phrase despite the fact that the word chocolate is not included in the dictionary.Words which actually name the property being sought after carry a "take-previous" a~tribute. Thus "coloured" when found will pull in the previous word e.g. "butter colour" although the word butter may be unknown or have no specific dictionary attribute recognised by the rule. particular, we intend to provide a user interface to allow the system to be modified for a specific topic by user definitions and examples.The potential also exists for mapping from our word based internal representation to a more abstract machine manipulable form.This may be the most interesting direction in which the work will lead.The code for the system is written in PRO-LOG (Clocksln and Mellish, 1981) as implemented on the Edinburgh Multi Access System (Byrd,1981) . This is a standard implementation of the language, with the single enhancement of a second internal database which is accessed using a hashing algorithm rather than a linear search. 
This has been used to improve the efficiency of the dictionary search procedures.PROLOG was chosen as an implementation language mainly because of the ease of manipulation of structures, lists and rules. The skeleton plant and keyword lists are held as facts in the PROLOG database.The implementation of the suffix stripping algorithm is a good example of the ease of expressing algorithms in PROLOG. The mapping from the original to our code being almost one to one.In the short term, the size of the dictionary and the rules built into the system must be increased so that a higher proportion of descriptions are correctly processed. Another problem which we must handle is the use of qualifiers referring to previous descriptions e.g. 'darker green" or "much less hairy than the last species'. We intend to tackle this problem by merging the current canonical description with that of plants referred to previously It would appear from work that has been carried out on dictionary analysis (Amsler, 1981) that a less intuitive method of word meaning categorization may be available. If it proves possible to ~ap from a standard dictionary to our set of attributes or some related set then the rigour of out internal dictionary would be significantly improved and a major area of repetitive work might be removed from the system. It is also intended to extend the suffix algorithm to handle prefixes and to convert the part of speech attribute according to the transformations carried out on the word. This has not proved important to us up to the present but future uses of the dictionary may depend on its being handled correctly.In the longer term we intend to generallse the system to cope with other topic areas.InIn addition the implementation on EMAS allows large PROLOG programs to be run. The interpretive nature of the language also means that trace debugging facilities are available and new pieces of code can be easily incorporated into the system.Initial indications suggest that for about 50% of descriptions, all ten properties are correctly evaluated and for about 30%, 8 or 9 properties are correct.The remaining 20% are unacceptable as less than 8 properties are correctly determined by the system. We anticipate that increasing the knowledge base of the system will significantly increase its accuracy.The very primitive "sketchy syntax" approach appears to offer practical solutions in analysing descriptive texts. Furthermore, there seems to be no intrinsic reason why a similar method could not be used to analyse temporal or causal structures.There will always be segments of text that the system cannot cope with and to achieve a greater degree of accuracy we will need to allow the system to consult with the user in resolving difficult pieces of text. : The texts currently used are wild plant descriptions taken directly from a popular book on the subject. Properties such as size, shape and colour are abstracted from the descriptions and related to parts of the plant in which we are interested.The resulting output is a standardined hierarchical structure holding only significant features of the description.The system, implemented in the PROLOG programming language, uses keywords co identify the way segments of the text relate to the object described.Information on words is held in a keyword list of nouns relating to parts of the object described. 
A dictionary contains the attributes of ordinary words used by the system to analyse the text.The text is divided into seE" ments using information provided by conjunctions and punctuation.About half the texts processed are correctly analysed at present. Proposals are made for future work to improve this figure.There seems Co be no inherent reason why the technique cannot be generalised so chac any text of seml-standard descriptions can be automatically converted to a canonical form.A lot of useful information, covering many subject areas, is presently available in printed form in catalogues, directories and guides. Good" examples are plants in "Collins Pocket Guide to Wild Flowers", aeroplanes in "Jane's All the World's Aircraft" and people in '~ho's Who". Because chls informaClon is represented in a stylised form, it is amenable CO machine processing Co abstract salient details concerning the entity being described.The research described here is part of a long term project to develop a system which can "read" descriptive text and so become an expert on the -~terial which has been read.The first stage of this research is to establish that it is indeed possible co abstract useful information from descriptive text and we have chosen as a typical example a text consisting of descriptions of wild plants.Our system reade this text and generates a formal canonical plant description.Ultimately this will be input to a knowledse-baeed system which will then be able to answer questions on wild plants.The paper gives a limited overview of the recent work in text analysis in order to establish a context for the approach we adopt.An outline of the operation of the system is then nadeo The analysis of our text proceeds in four separate stages and these are considered in con-Junction with a sample text. The first stage at-Caches to each word in the text attributes which are held in either a keyword llst or the system dictionary. This expanded text is then split up using conjunctions, punctuation marks and the keywords in the text to assign each segment of the text to a particular part of the plant. The chard stage gathers up the descriptions for a particular part and abstracts properties from them. The final operation formats the output as required.We then look at the more detailed operation of the system in terms of specific parts of £nteresto This covers the dictionary, skeleton structures, text splitting, text analysis and the limited word guessing attempted by the system. Future developments are then considered. In particular the possibility of generalising the system to handle ocher topics. The actual implementation of the system and the use of FROLOG are examined and we conclude with some notes on the current ucillty of our system.Many research workers are interested in different aspects of text analysis. Much of the The work done by Schank (197~) and that of Sager (1981) are two contrasting examples of this interest. In addition to the research oriented work, some commercial groups are interested in the practicability of generating database input from text. its properties in a slnEle piece of text. The basic properties we are lookin 8 for -shape, colour, size -are all described by words wi~h a direct physical relation or with a simple men~al association. 
Although the internal details of the various systems are totally different, the final result is some form of layout, script or structure which has been filled out with details from the text. The approach of the various groups can be contrasted according to how much of the text is preserved at this point and how much additional detail has been added by the system. DeJong (1979) processes newswire stories, and once the key elements have been found the rest of the text is abandoned. Sager makes the whole text fit into the layout, as here small details may be of vital importance to the end user of the processed text. Schank, in his story understanding programs, may actually end up with more information than the original text, supplied from the system's own world knowledge.

The other contrasting factor is the degree of limitation of the domain of interest of the text processors. The more a system has been designed with a practical end in view, the more limited the domain. Schank is operating at the level of general language understanding. DeJong is limiting this to the task of news recognition and abstraction, but only certain stories are handled by the system. Sager has reduced the range still further, to a particular type of medical diagnoses.

Very recent work appears to be approaching text understanding from a word-oriented viewpoint: each word has associated with it processes which drive the analysis of the text (Small, 1981). We have also been encouraged in our own approach by Kelly and Stone's (1979) work on word disambiguation, the implication of which seems to be that word-driven rules can resolve ambiguities of meaning in a local context.

Our own case is a purely practical attempt to generate large amounts of database-building information from single-topic texts. It should not be assumed, however, that a truly comprehensive syntax for a descriptive text would be simpler than for other types. The reverse may be true, and the author of the descriptions may attempt to liven up his work with asides, unusual word orders and additional atmospheric details. Our system does not use sophisticated grammatical techniques. It is our contention that in the domain of descriptive texts we can make certain assumptions about the way the descriptive data is handled. These allow very crude parsing to be sufficient in most cases. Similarly, the semantic structures involved are simple. A description of an object consisting of several parts usually mentions the part and its properties in a single piece of text. The basic properties we are looking for - shape, colour, size - are all described by words with a direct physical relation or with a simple mental association. What we are really trying to do is tidy the description into a set of suitable noun phrases.

III OUTLINE OF THE SYSTEM

The text analysis system has been constructed on the assumption that much of the information held in descriptive texts can be extracted using very simple rules. These rules are analogous to the "sketchy syntax" suggested by Kelly and Stone and operate on the text on a local rather than a global basis. At the time of writing our system processes plant descriptions, in search of ten properties which we consider distinctive. Examples of these properties are the size of the plant, the colour of its flowers and the shape of its flowers. New properties can be added simply by extending the skeleton plant description. The plant descriptions are taken from McClintock and Fitter (1974). The system has been built to handle this topic, and it attempts to fill out various properties for selected parts of a plant. A skeleton description is used to drive the processing of the text.
This indicates the parts of the plant of interest and the properties required for each part. The structure which we presently use is shown in Example 1, after it has been filled out by processing the accompanying description. It should be noted that if the system cannot find a property then the null property 'noinfo' is returned.

An outline of how a description is processed by the system and converted to canonical form is given in Figure 1. There are four distinct stages in the transformation of the text. The text is split into segments, each with an attached keyword; this keyword identifies the segment as describing a particular part of the plant. Text segments are then gathered together for a particular keyword. This may pull together text from separate parts of the original description. This new unit of text is then examined to see if any of the words or phrases in it satisfy the specific property rules required for this part of the plant. If found, the phrases are inserted into appropriate parts of the structure. The ultimate output of the system is intended as input to a relational database system developed at the University of Strathclyde. At the moment the structure is displayed in a form that allows checking of the system performance.

A. Dictionary processor. The raw text is read in and each word in the text is checked in a dictionary/keyword list. Each dictionary entry has an associated list of attributes describing both syntactic and semantic attributes of that word. These attributes are looked at in more detail in section IV. If a word in the text appears in the dictionary it is supplemented with an attribute list abstracted from the dictionary. The keywords for a text depend on which parts of the object we are interested in. Thus for a plant we need to include all possible variants of flower (floret, bud) and of leaf (leaflet) and so on. Fortunately this is not a large number of words and they can be easily acquired from a thesaurus. The output from this stage is a list of words, and attached to each word is a list of the attributes of this word.
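The four-stage pipeline just described (attribute lookup, segmentation, gathering by plant part, property extraction into a skeleton with 'noinfo' defaults) can be sketched in miniature as follows. The keyword, dictionary and property tables are invented for the illustration, and the real system, written in PROLOG, is far richer; the sketch only shows the overall flow.

```python
# Illustrative sketch only: a toy version of the four-stage pipeline described
# above (attribute lookup, segmentation, gathering by plant part, property
# extraction). Keyword and dictionary tables are invented for the example.
import re

KEYWORDS = {"flowers": "flower", "flower": "flower", "leaves": "leaf", "leaf": "leaf"}
DICTIONARY = {"yellow": "colour", "oval": "shape", "hairy": "texture", "small": "size"}
WANTED = {"flower": ["colour", "shape"], "leaf": ["shape", "texture"]}

def segments(text):
    """Stage 2: split on punctuation and the conjunction 'and'."""
    return [s.strip() for s in re.split(r"[,.;]| and ", text.lower()) if s.strip()]

def analyse(text):
    # Stages 3/4: gather segments per plant part and fill a skeleton structure,
    # defaulting every wanted property to 'noinfo'.
    skeleton = {part: {prop: "noinfo" for prop in props} for part, props in WANTED.items()}
    current_part = None
    for seg in segments(text):
        words = seg.split()
        # A keyword in the segment tells us which part is being described.
        for w in words:
            if w in KEYWORDS:
                current_part = KEYWORDS[w]
        if current_part is None:
            continue
        # Stage 1 (dictionary lookup) plus property extraction for this part.
        for w in words:
            prop = DICTIONARY.get(w)
            if prop and prop in skeleton[current_part]:
                skeleton[current_part][prop] = w
    return skeleton

if __name__ == "__main__":
    print(analyse("Small yellow flowers, leaves oval and hairy."))
    # {'flower': {'colour': 'yellow', 'shape': 'noinfo'},
    #  'leaf': {'shape': 'oval', 'texture': 'hairy'}}
```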
{ "paperhash": [ "amsler|a_taxonomy_for_english_nouns_and_verbs", "buchanan|review_of_\"syntactic_methods_in_pattern_recognition_by_k._s._fu\",_academic_press,_ny", "mcclintock|the_pocket_guide_to_wild_flowers" ], "title": [ "A Taxonomy for English Nouns and Verbs", "Review of \"Syntactic Methods in Pattern Recognition by K. S. Fu\", Academic Press, NY", "The pocket guide to wild flowers" ], "abstract": [ "The definition texts of a machine-readable pocket dictionary were analyzed to determine the disambiguated word sense of the kernel terms of each word sense being defined. The resultant sets of word pairs of defined and defining words were then computationally connected into two taxonomic semilattices (\"tangled hierarchies\") representing some 24,000 noun nodes and 11,000 verb nodes. The study of the nature of the \"topmost\" nodes in these hierarchies, and the structure of the trees reveal information about the nature of the dictionary's organization of the language, the concept of semantic primitives and other aspects of lexical semantics. The data proves that the dictionary offers a fundamentally consistent description of word meaning and may provide the basis for future research and applications in computational linguistic systems.", "This book is an integrated sequence of papers Professor Michie has published, for a general audience, over the last decade or so. It contains the following: Introduction Trial and Error (Science Survey, 1961) Puzzle-learning versus Game-learning (The Scientist Speculates, 1962) Game-playing Automata (Advances in Programming and Non-Numerical Computation, 1966) Machines that Play and Plan (Science Journal, 1968) Computer servant or master (Theoria to Theory, 1968) Integrated Cognitive Systems (Nature, 1970) Tokyo-Edinburgh Dialogue (The Computer Journal, 1971) Artificial Intelligence (New Society, 1971) On not seeing things (Experimental Programming Report, 1971) Programmer's Gambit (New Scientist, 1972) Machine Intelligence at Edinburgh (Management Informatics, 1973) Theory of Intelligence (Nature, 1973) Memory Mechanisms and Learning (Simple Nervous Systems, 1974) Knowledge Engineering (Kybernetics, 1973) Maching Intelligence as Technology (Proceedings of the Conference on Shop Floor Automation, 1973) 7. The Structure of Belief Systems -Robert P. Abelson", "The pocket guide to wild flowers , The pocket guide to wild flowers , مرکز فناوری اطلاعات و اطلاع رسانی کشاورزی" ], "authors": [ { "name": [ "R. A. Amsler" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Jack Buchanan" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "D. McClintock", "R. Fitter", "Francis Rose" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null ], "s2_corpus_id": [ "18782721", "195349308", "82206852" ], "intents": [ [ "background" ], [], [] ], "isInfluential": [ false, false, false ] }
- Problem: The paper aims to develop a system that can interpret descriptive texts without complex grammars, specifically focusing on wild plant descriptions from a popular book. - Solution: The system, implemented in PROLOG, uses keywords to identify text segments related to the plant parts described, abstracting properties like size, shape, and color to create a standardized hierarchical structure for database use. The hypothesis is that this system can automatically convert semi-standard descriptions into a canonical form for various topics.
A Status Report on the LRC Machine Translation System
This paper discusses the linguistic and computational techniques employed in the current version of the Machine Translation system being developed at the Linguistics Research Center of the University of Texas, under contract to Siemens AG in Munich, West Germany. We pay particular attention to the reasons for our choice of certain techniques over other candidates, based on both objective and subjective criteria. We then report the system's status vis-a-vis its readiness for application in a production environment, as a means of justifying our claims regarding the practical utility of the methods we espouse.
{ "name": [ "Slocum, Jonathan" ], "affiliation": [ null ] }
First Conference on Applied Natural Language Processing
1983-02-01
The LRC MT system is one of very few large-scale applications of modern computational linguistics techniques [Lehmann, 1981]. Although the LRC MT system is nearing the status of a production system (a version should be delivered to the project sponsor about the time this conference takes place), it is not at all static; rather, it is an evolving collection of techniques which are continually tested through application to moderately large technical manuals ranging from 50 to 200 pages in length. Thus, our "applied" system remains a research vehicle that serves as an excellent testbed for proposed new procedures.

In general, the criteria for our choice of linguistic and computational techniques are three: effectiveness, convenience of use, and efficiency. These criteria are applied in a context where the production of an MT system to be operational in the near-term future is of critical concern. Candidate techniques which do not admit near-term, large-scale application thus suffer an overwhelming disadvantage. The questions confronting us are, then, twofold: (1) which techniques admit such application; and (2) which of these best satisfy our three general criteria? The first question is usually answered through an evaluation of the likely difficulties and requirements for implementation; the second, through empirical results in the course of experiments.

Our evaluation of the LRC MT system's current status will be based on three points: (a) the system's provision of all the tools necessary for users to effect the complete translation process (including text processing, editing, terminology maintenance and look-up, etc.); (b) quantitative performance, i.e., throughput on a particular machine; and (c) qualitative performance, together with what is known about overall cost-effectiveness (i.e., the number of translators that can be supported by a single system, the expected throughput, any other personnel necessary for day-to-day operation of the system, and the overall costs of translation relative to the norm experienced in human translation). Final numbers will not be available by the time of the conference, but preliminary experiments by our sponsor allow some reasonable projections.

Our separation of "linguistic techniques" from "computational techniques" (discussed in the next section) is somewhat artificial, but it has validity in a broad sense. In this section we present the reasons for our use of the following linguistic techniques: (a) a phrase-structure grammar; (b) syntactic features; (c) semantic features; (d) scored interpretations; (e) transformations indexed to specific rules; (f) a transfer component; and (g) attached procedures to effect translation.

We employ a phrase-structure grammar with sufficient lexical controls (as opposed to, say, a lexical-functional grammar). Of all our linguistic decisions, this is the most controversial, and consequently it receives the most attention. Generally speaking, there are two competing claims: that syntax rules per se are inadequate (e.g., [Cullingford, 1978]), and that other forms of grammar (ATNs, transformational grammar [Petrick, 1973], procedural grammars [Winograd, 1972], word-experts [Small, 1980]) are superior. We will deal with these in turn. The school of thought that claims that syntax rules are not appropriate models of language holds that language should be treated [almost] entirely on the basis of semantics, guided by a strong underlying model of the current situational context, and the expectations that may be derived therefrom. We cannot argue against the claim that semantics is of critical concern in Natural Language Processing. However, as yet no strong case has been advanced for the abandonment of syntax.
Moreover, no system has been developed by any of the adherents of the "semantics only" school of thought that has more-or-less successfully dealt with ALL of a wide range --or at least a large volume --of material. A more damaging argument against this school is that every NLP system to date that HAS been applied to large volumes of text (in the attempt to process ALL of it in some significant sense) has been based on a strong syntactic model of language (see, e.g., [Boitet et al., 1980b], [Damerau, 1981], [Hendrix et al., 1978], [Lehmann et al., 1981], [Martin et al., 1981], [Robinson, 1982], and [Sager, 1981]).

There are other schools of thought that hold phrase-structure (PS) rules in disrespect, while admitting the utility (necessity) of syntax. It is claimed that the phrase-structure formalism is inadequate, and that other forms of grammar are necessary. (This has been a long-standing position in the linguistic community, being upheld there before most computational linguists jumped on the bandwagon; ironically, this position is now being challenged by some within the linguistic community itself, who are once again supporting PS rules as a model of natural language use [Gazdar, 1981].) The anti-PS positions in the NLP community are all, of necessity, based on practical considerations, since the models advanced to replace PS rules are formally equivalent in generative power (assuming the PS rules to be augmented, which is always the case in modern NLP systems employing them). But cascaded ATNs [Woods, 1980], for example, are only marginally different from PS rule systems. It is curious to note that only one of the remaining contenders (a transformational grammar [Damerau, 1981]) has been demonstrated in large-scale application --and even this system employs PS rules in the initial stages of parsing. Other formal systems (e.g., procedural grammars [Winograd, 1972]) have been applied to semantically deep (but linguistically impoverished) domains --or to excessively limited domains (e.g., Small's [1980] "word expert" parser seems to have encompassed a vocabulary of less than 20 items).

For practical application, it is necessary that a system be able to accumulate grammar rules, and especially lexical items, at a prodigious rate by current NLP standards. The formalisms competing with PS rules and dictionary entries of modest size seem to be universally characterizable as requiring enormous human resources for their implementation in even a moderately large environment. This should not be surprising: it is precisely the claim of these competing methodologies (those that are other than slight variations on PS rules) that language is an exceedingly complex phenomenon, requiring correspondingly complex techniques to model.
For "deep understanding" applications, we do noc contest this claim.But we do maintain chat there are some applications that do not seem to require this level of effort for adequate results in a practical setting.Our particular application -automated translation of technical texts --seems CO fell in ChiJ category.The LRC )iT system is currently equipped with something over 400 PS rules describing the Source Language (German), and nearly 10,000 lexical entries in each of two languages (German and the Target Language --English).The current state of our coverage of the SL is that the system is able to parse and acceptably translate the majority of sentences in previously-unseen texts, within the subject areas bounded by our dictionary (specific figures will be related below).By the time this conference convenes, we will have begun the process of adding to the system an analysis grammar of the current TL (English), so that the direction of translation may be reversed; we anticipate bringing the English grammar up to the level of the German gr2m~ar in about a year's time.Our expectations for eventual coverage a~e that around 1,000 PS rules will he adequate co account for almost all sentence forms actually encountered in technical texts, whatever the language.We do not feel constrained to account for every possible sentence form in such texts -nor for sentence forms not found in such texts (as in the case of poetry) --since the required effort would not be cost-effective whether measured in financial or human terms, even if it were possible using current techniques (which ve doubt).Our use of syntactic features is relatively noncontroversial, given our choice of the PS rule formalism.We employ syntactic features for two purposes.One is the usual practice of using such features to restrict the application of PS rules (e.g., by enforcing subject-verb number agreement).The other use is perhaps peculiar to our type of application:once an analysis is achieved, certain syntactic features are employed to control the course (and outcome) of translation --i.e., generation of the TL sentence. The "augmentations" to our PS rules include procedures written in a formal language (so that our linguists do not have Co learn LISP) that manipulate features by restricting their presence. their values if present, etc., and by moving them from node to node in the "parse tree" during the course of the analysis.As is the case with other researchers employing such techniques, we have found this to be an extremely powerful (and of course necessary) means of restricting the activities of the parser.We employ simple semantic features, as opposed to complex models of the domain. Our reasons are primarily practical.First, they seem sufficient for at least the initial stage of our application.Second, the thought of writing complex models of even one complete technical domain is staggering: the operation and maintenance manuals we ar e currently working with (describing a digital telephone switching system) are part of a document collection that is expected to comprise some 100,000 pages of text when complete.A research group the size of ours would not even be able to read that volume of material, much less write the "necessary" semantic models subsumed by it, in any reasonable amount of time. 
(The group would also have to become electronics engineers, in all likelihood.) If such models are indeed required for our application, we will never succeed. As it turns out, we are doing surprisingly well without such models. In fact, our semantic feature system is not yet being employed to restrict the analysis effort at all; instead, it is used at "transfer time" (described later) to improve the quality of the translations, primarily of prepositions. We look forward to extending the use of semantic features to other parts of speech, and to substantive activity during analysis; but even we were pleased at the results we achieved using only syntactic features.

It is a well-known fact that NLP systems tend to produce many readings of their input sentences (unless, of course, constrained to produce the first reading only --which can result in the "right" interpretation being overlooked). The LRC MT system produces all interpretations of the input "sentence" and assigns each of them a score, or plausibility factor [Robinson, 1982]. This technique can be used, in theory, to select a "best" interpretation from the possible readings of an ambiguous sentence. We base our scores on both lexical and grammatical phenomena --plus the types of any spelling/typographical errors, which can sometimes be "corrected" in more than one way. Our experiences relating to the reliability and stability of heuristics based on this technique are decidedly positive: we employ only the (or a) highest-scoring reading for translation (the others being discarded), and our informal experiments indicate that it is very rarely true that a better translation results from a lower-scoring analysis. (Surprisingly often, a number of the higher-scoring interpretations will be translated identically. But poorer translations are frequently seen from the lower-scoring interpretations, demonstrating that the technique is indeed effective.)
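A minimal sketch of how scored interpretations might be compared and the highest-scoring reading selected is given below. The penalty values and the shape of an "analysis" are invented for the example; the paper does not give the LRC system's actual scoring formula, only that it is based on lexical and grammatical phenomena plus spelling-correction effects.

```python
# Illustrative sketch only: score competing analyses and keep the best one.
# The penalty values and the "Analysis" structure are invented; they are not
# the LRC system's actual scoring scheme.
from dataclasses import dataclass, field

@dataclass
class Analysis:
    reading: str
    rule_penalties: list = field(default_factory=list)      # per-rule implausibility
    spelling_penalties: list = field(default_factory=list)  # per corrected word

    def score(self):
        # Higher is more plausible; each penalty lowers the score.
        return -(sum(self.rule_penalties) + sum(self.spelling_penalties))

def best_reading(analyses):
    """Return the (or a) highest-scoring analysis; ties resolved arbitrarily."""
    return max(analyses, key=Analysis.score)

if __name__ == "__main__":
    candidates = [
        Analysis("attachment to verb", rule_penalties=[0.5, 1.0]),
        Analysis("attachment to noun", rule_penalties=[0.5], spelling_penalties=[2.0]),
    ]
    chosen = best_reading(candidates)
    print(chosen.reading, chosen.score())   # attachment to verb -1.5
```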
We employ a transformational component during both the analysis phase and the translation phase. The transformations, however, are indexed to specific syntax rules rather than loosely keyed to syntactic constructs. (Actually, both styles are available, but our linguists have never seen the need or practicality of employing the open-ended variety.) It is clearly more efficient to index transformations to specific rules when possible; the import of our findings is that it seems to be unnecessary to have open-ended transformations --even during analysis, when one might intuitively expect them to be useful.

It is frequently argued that translation should be a process of analyzing the Source Language (SL) into a "deep representation" of some sort, then directly synthesizing the Target Language (TL) (e.g., [Carbonell, 1978]). We and others [King, 1981] contest this claim --especially with regard to "similar languages" (e.g., those in the Indo-European family). One objection is based on large-scale, long-term trials of the "deep representation" (in MT, called the "pivot language") technique by the MT group at Grenoble [Boitet, 1980a]. After an enormous investment in time and energy, including experiments with massive amounts of text, it was decided that the development of a suitable pivot language (for use in Russian-French translation) was probably impossible. Another objection is based on practical considerations: since it is not likely that any NLP system will in the foreseeable future become capable of handling unrestricted input --even in the technical area(s) for which it might be designed --it is clear that a "fail-soft" technique is necessary. It is not obvious that such is possible in a system based solely on a pivot language; a hybrid system capable of dealing with shallower levels of understanding is necessary in a practical setting. This being the case, it seems better in near-term applications to start off with a system employing a "shallow" but usable level of analysis, and deepen the level of analysis as experience dictates and resources permit. Our alternative is to have a "transfer" component which maps "shallow analyses of sentences" in the SL into "shallow analyses of equivalent sentences" in the TL, from which synthesis then takes place. While we and the rest of the NLP community continue to debate the nature of an adequate pivot language (i.e., the nature of deep semantic models and the processing they entail), we can hopefully proceed to construct a usable system capable of progressive enhancement as linguistic theory becomes able to support deeper models.

Our Transfer procedures (which effect the actual translation of SL into TL) are tightly bound to nodes in the analysis (parse tree) structure [Paxton, 1977]. They are, in effect, suspended procedures --the same procedures that constructed the corresponding parse tree nodes to begin with. This is to be preferred over a more general, loose association based on syntactic constructs because, aside from its advantage in sheer computational efficiency, it eliminates the possibility that the "wrong" procedure can be applied to a construct. The only real argument against this technique, as we see it, is based on space considerations: to the extent that different constructs share the same transfer operations, replication of the procedures that implement said operations (and editing effort to modify them) is possible. We have not noticed this to be a problem. For a while, our system load-up procedure searched for duplicates of this nature and eliminated them; however, the gains turned out to be minimal --different constructs typically do require different operations.
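The notion of transfer procedures bound to the very nodes they built can be pictured roughly as follows. The node representation, the toy German-English fragment and the procedure bodies are invented for the illustration; they are not the LRC system's representations, which are LISP structures produced by the rule-body procedures.

```python
# Illustrative sketch only: a parse-tree node carries the transfer procedure
# attached when the node was built, so transfer simply re-invokes the node's
# own procedure (no lookup by construct type, no chance of the wrong one).
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Node:
    label: str
    children: List["Node"] = field(default_factory=list)
    sl_word: str = ""                       # source-language word, if a leaf
    transfer: Callable = None               # procedure bound to this node

    def translate(self):
        return self.transfer(self)

# Invented transfer procedures for a tiny German -> English noun phrase.
def transfer_leaf(node):
    glossary = {"der": "the", "Bericht": "report"}   # toy bilingual lexicon
    return glossary.get(node.sl_word, node.sl_word)

def transfer_np(node):
    # For this construct, target word order happens to match source order.
    return " ".join(child.translate() for child in node.children)

if __name__ == "__main__":
    np = Node("NP",
              children=[Node("DET", sl_word="der", transfer=transfer_leaf),
                        Node("N", sl_word="Bericht", transfer=transfer_leaf)],
              transfer=transfer_np)
    print(np.translate())   # -> "the report"
```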
III COMPUTATIONAL TECHNIQUES EMPLOYED

Again, our separation of "linguistic" from "computational" techniques is somewhat artificial, but nevertheless useful. In this section we present the reasons for our use of the following computational techniques: (a) a bottom-up, all-paths parser; (b) associated rule-body procedures; (c) spelling correction; (d) chart searching in case of analysis failures; and (e) recursive parsing of parenthetical expressions. The parser, in particular, received our greatest experimental scrutiny. We have collected a substantial body of empirical evidence relating to parsing techniques. Since the evidence and conclusions require lengthy discussion, and are presented elsewhere [Slocum, 1981], we will only briefly summarize the results.

The evidence indicates that our use of an all-paths bottom-up parser is justified, given the current state of the art in Computational Linguistics. Our reasons are the following: first, the dreaded "exponential explosion" of processing time has not appeared (and our grammar and test texts are among the largest in the world), but instead, processing time appears to be linear with sentence length --even though our system produces all possible interpretations; second, top-down parsing methods suffer inherent disadvantages in efficiency, and bottom-up parsers can be and have been augmented with "top-down filtering" to restrict the syntax rules applied to those that an all-paths top-down parser would apply; third, it is difficult to persuade a top-down parser to continue the analysis effort to the end of the sentence when it blocks somewhere in the middle --which makes the implementation of "fail-soft" techniques that much more difficult; and lastly, the lack of any strong notion of how to construct a "best-path" parser, coupled with the raw speed of well-implemented parsers, implies that an all-paths parser which scores interpretations and can continue the analysis to the end of the sentence is best in a practical application such as ours.

We associate a procedure directly with each individual syntax rule, and evaluate it as soon as the parser determines the rule to be (seemingly) applicable [Pratt, 1973; Hendrix, 1978]. These rule-body procedures are written in our formal language and manipulate the features/values of nodes in the tree --i.e., no knowledge of LISP is necessary to code effective procedures. Since these procedures are compiled into LISP, all the power of LISP is available as necessary. The chief linguist on our project, who has a vague knowledge of LISP, has employed OR and AND operators to a significant extent (we didn't bother to include them in the specifications of the formal language, though we obviously could have), and on rare occasions has resorted to using COND. No other calls to true LISP functions (as opposed to our formal operators, which are few and typically quite primitive) have seemed necessary, nor has this capability been requested, to date. The power of our rule-body procedures seems to lie in the choice of features/values that decorate the nodes, rather than the processing capabilities of the procedures themselves.
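A toy version of a rule-body procedure attached to a single phrase-structure rule, checking the subject-verb number agreement mentioned earlier and passing features up to the new node, might look like this; the rule, feature names and node representation are invented and are not the LRC formal language.

```python
# Illustrative sketch only: a phrase-structure rule with an attached
# "rule-body procedure" that enforces number agreement and copies features
# up to the new node (invented representation, not the LRC formalism).

def s_rule_body(np, vp):
    """Attached procedure for the rule S -> NP VP.
    Returns the feature set of the new S node, or None to block the rule."""
    if np.get("number") != vp.get("number"):
        return None                       # agreement failure: rule does not apply
    # Move selected features from the daughters to the new node.
    return {"number": np["number"], "tense": vp.get("tense")}

if __name__ == "__main__":
    np = {"cat": "NP", "number": "singular"}
    vp = {"cat": "VP", "number": "plural", "tense": "present"}
    print(s_rule_body(np, vp))            # None: agreement blocks the rule
    vp["number"] = "singular"
    print(s_rule_body(np, vp))            # {'number': 'singular', 'tense': 'present'}
```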
There are limitations and dangers to spelling correction in general, but we have found it to be an indispensable component of an applied system. People do make spelling and typographical errors, as is well known; even in "polished" documents they appear with surprising frequency (about every other page, in our experience). Arguments by LISP programmers (re: INTERLISP's DWIM) aside, users of applied NLP systems distinctly dislike being confronted with requests for clarification --or, worse, unnecessary failure --in lieu of automated spelling correction. Spelling correction, therefore, is necessary. Luckily, almost all such errors are treatable with simple techniques: single-letter additions, omissions, and mistakes, plus two- or three-letter transpositions account for almost all mistakes. Unfortunately, it is not infrequently the case that there is more than one way to "correct" a mistake (i.e., resulting in different corrected versions). Even a human cannot always determine the correct form in isolation, and for NLP systems it is even more difficult. There is yet another problem with automatic spelling correction: how much to correct. Given unlimited rein, any word can be "corrected" to any other. Clearly there must be limits, but what are they? Our informal findings concerning how much one may safely "correct" in an application such as ours are these: the few errors that simple techniques have not handled are almost always bizarre (e.g., repeated syllables or larger portions of words) or highly unusual (e.g., blanks inserted within words); correction of more than one error in a word is dangerous (it is better to treat the word as unknown, hence a noun); and "correction" of errors which have converted one word into another (valid in isolation) should not be tried.

In the event of failure to achieve a comprehensive analysis of the sentence, a system such as ours --which is to be applied to hundreds of thousands of pages of text --cannot indulge in the luxury of simply replying with an error message stating that the sentence cannot be interpreted. Such behavior is a significant problem, one which the NLP community has failed to come to grips with in any coherent fashion. There have, at least, been some forays. Weischedel and Black [1980] discuss techniques for interacting with the linguist/developer to identify insufficiencies in the grammar. This is fine for development purposes. But, of course, in an applied system the user will be neither the developer nor a linguist, so this approach has no value in the field. Hayes and Mouradian [1981] discuss ways of allowing the parser to cope with ungrammatical utterances; this work is in its infancy, but it is stimulating nonetheless. We look forward to experimenting with similar techniques in our system. What we require now, however, is a means of dealing with "ungrammatical" input (whether through the human's error or the shortcomings of our own rules) that is highly efficient, sufficiently general to account for a large, unknown range of such errors on its first outing, and which can be implemented in a short period of time. We found just such a technique three years ago: a special procedure (invoked when the analysis effort has been carried through to the end of the sentence) searches through the parser's chart to find the shortest path from one end to the other; this path represents the fewest, longest-spanning phrases which were constructed during the analysis. Ties are broken by use of the standard scoring mechanism that provides each phrase in the analysis with a score, or plausibility measure (discussed earlier). We call this procedure "phrasal analysis". To our knowledge, no other NLP system relies on such a general technique for searching the parser's chart when an analysis effort has failed. We think that phrasal analysis --which is simple and independent of both language and grammar --could be useful in other applications of NLP technology, such as natural language interfaces to databases.
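The "fewest, longest-spanning phrases" search can be sketched as a small shortest-path computation over the chart. The chart below is an invented toy (edges are phrases spanning word positions, each with a plausibility score), and the real procedure's data structures and tie-breaking are certainly more elaborate.

```python
# Illustrative sketch only: pick the fewest, longest-spanning phrases that
# cover the whole sentence, breaking ties by total plausibility score.
# Chart edges are (start, end, label, score) over word positions 0..n.
from functools import lru_cache

CHART = [
    (0, 2, "NP", 3.0), (2, 5, "VP", 2.5),          # two phrases cover 0..5
    (0, 1, "DET", 1.0), (1, 2, "N", 1.0),
    (2, 3, "V", 1.0), (3, 5, "NP", 2.0),
]
N = 5  # number of words

def phrasal_analysis(chart, n):
    edges_from = {}
    for start, end, label, score in chart:
        edges_from.setdefault(start, []).append((end, label, score))

    @lru_cache(maxsize=None)
    def best(pos):
        """Return (num_phrases, -total_score, labels) for the best path pos..n."""
        if pos == n:
            return (0, 0.0, ())
        candidates = []
        for end, label, score in edges_from.get(pos, []):
            count, neg, labels = best(end)
            candidates.append((count + 1, neg - score, (label,) + labels))
        if not candidates:
            return (float("inf"), 0.0, ())   # dead end: no phrase starts here
        return min(candidates)               # fewest phrases, then highest score

    count, neg_score, labels = best(0)
    return list(labels), count, -neg_score

if __name__ == "__main__":
    print(phrasal_analysis(CHART, N))   # (['NP', 'VP'], 2, 5.5)
```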
Few NLP systems have dealt with parenthetical expressions; but MT researchers know well that these constructs appear in abundance in technical texts. We deal with this phenomenon in the following way: rather than treating parentheses as lexical items, we make use of LISP's natural treatment of them as list delimiters, and treat the resulting sublists as individual "words" in the sentence; these "words" are "lexically analyzed" via recursive calls to the parser. Aside from the elegance of the treatment, this has the advantage that "ungrammatical" parenthetical expressions may undergo phrasal analysis and thus become single-phrase entities as far as the analysis of the encompassing sentence is concerned; thus, ungrammatical parenthetical expressions need not result in ungrammatical (hence poorly handled) sentences.

No NLP system is likely to be successful in isolation: an environment of support tools is necessary for ultimate acceptance on the part of prospective users. The following support tools, we think, constitute a minimum workable environment for both development and use: a DBMS for handling lexical entries; validation programs that verify the admissibility of all linguistic rules (grammar, lexicons, transformations, etc.) according to a set of formal specifications; dictionary programs that search through large numbers of proposed new lexical entries (words, in all relevant languages) to determine which entries are actually new, and which appear to replicate existing entries; defaulting programs that "code" new lexical entries in the NLP system's chosen formalism automatically, given only the root forms of the words and their categories, using empirically determined best guesses based on the available dictionary database entries plus whatever orthographic information is available in the root forms; and benchmark programs to test the integrity of the NLP system after significant modifications [Slocum, 1982]. A DBMS for handling grammar rules is also a good idea. For the translator, additional tools are needed: a text-processing system able to handle paragraphs, multi-column tables, flowcharts, figure labels, and the like; a powerful on-line editing program with special capabilities (such as single-keystroke commands to look up words in on-line dictionaries) in addition to the normal editing commands (almost all of which should be invokable with a single keystroke); and also, perhaps, (access to) a "term databank," i.e., an on-line database of technical terms used in the subject area(s) to be covered by the MT system. The LRC MT system already provides all of the tools mentioned above, with the exception of the text editor and terminology database (both of which our sponsor will provide). All of this comes in a single integrated working environment, so that our linguists and lexicographers can implement changes and test them immediately for their effects on translation quality, and modify or delete their additions with ease, if desired.

The average performance of the LRC MT system when translating technical manuals from German into English, running in compiled INTERLISP on a DEC 2060 with over a million words of physical memory, has been measured at slightly under 2 seconds of CPU time per input word; this includes storage management (the garbage collector alone consumes 45% of all CPU time on this limited-address-space machine), paging, swapping, and I/O --that is, all forms of overhead. Our experience on the 2060 involved the translation of some 330 pages of text, in three segments, over a two-year period. On our Symbolics LM-2 Lisp Machine, with 256K words of physical memory, preliminary measurements indicate an average performance of 6-10 seconds (real time) per input word, likewise including all forms of overhead. Our LM-2 experience to date has involved the translation of about 200 pages of text in a single run. The paging rate indicates that, with added memory (512K words is "standard" on these machines), we could expect a significant reduction in this performance figure. With a faster, second-generation Lisp Machine, we would expect a more substantial reduction of real-time processing requirements. We hope to have had the opportunity to conduct an experiment on at least one such machine by the time this conference convenes.
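For a rough sense of what roughly 2 CPU seconds per word implies, the back-of-the-envelope calculation below assumes about 250 words per page; that page-length figure is an assumption made here for illustration, not a number from the text.

```python
# Back-of-the-envelope only: turn the "~2 CPU seconds per word" figure into
# time per page, assuming ~250 words per page (assumed, not from the paper).
SECONDS_PER_WORD = 2.0
WORDS_PER_PAGE = 250          # assumption for illustration

seconds_per_page = SECONDS_PER_WORD * WORDS_PER_PAGE
print(f"{seconds_per_page / 60:.1f} CPU minutes per page")                    # ~8.3
print(f"{330 * seconds_per_page / 3600:.0f} CPU hours for a 330-page text")   # ~46
```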
Measuring MT system throughput is one thing. Measuring "machine translation quality" is quite another, since the standards for measurement (and for interpreting the measurements) are little understood, and vary widely. Thus, "quality" measurements are of little validity. However, because there is usually a considerable amount of lay interest in such numbers, we shall endeavor to indicate why they are basically meaningless, and then report our findings for the benefit of those who feel a need to know.

Certainly it is the case that "correctness" numbers can theoretically give some indication of the quality of translation. If an MT system were said to translate, say, 10% of its input correctly, no one would be likely to consider it usable. The trouble is, quoted figures almost universally hover at the opposite extreme of the spectrum --around 90% --for MT systems that vary remarkably w.r.t. the subjective quality of their output. (Since, to the lay person, "90% correct" seems to constitute minimal acceptable quality, the consistent use of the 90% figure should not be surprising.) The trouble arises from at least the following human variables: who performs the measurement? what, exactly, is measured? and by what standards? Since almost all measurements are performed by the vendor of the system in question, there is obvious room for bias. Second, if one measures "words translated correctly," whatever that means, that is a very different thing from measuring, e.g., "sentences translated correctly," whatever that means. Finally, there is the matter of defining the operative word, "correct". Since no two translators are likely to agree on what constitutes a "correct" translation --to say nothing of establishing a rigorous, objective standard --the notion of "correctness" will naturally vary depending on who determines it. It will also vary depending on the amount of time available to perform the measurement: it is widely recognized that an editor will change more in a given translation, the more time he has to work on it. Finally, "correctness" will vary depending on the use to which the translation is intended to be put, the classical first division being information acquisition vs. dissemination. The most meaningful measurements, then, are those made by the prospective user, applying the system to such varieties of text as it is intended to handle (in the near term, at least); the texts should be chosen by the user, and not divulged to the vendor beforehand except perhaps in the form of a list of words or technical terms (in root form) which appear therein --and that, for not too long a period of time before the test.
"translation units', since isolated words and phrases appear frequently) which were translated from German into acceptable English; if any change to the translated unit was necessary, however slight, the translation was considered incorrect; the test runs were made once or twice for each text --once, before the text was ever seen by the LRC staff (a "blind" run), and once more, after a few months of system enhancement based in part on the previous results (a "follow-up" run); the project sponsor always provided the LRC with a list of the words and technical terms said to be employed in the text (the list was sometimes incomplete, as one would expect of human compilations of the vocabulary in a large document) .The first run, on a 50-page text, was performed only after the text had been studied for some time; the second and third runs, on an 80-page text, were performed both ways ('blind" and "follow-up');the fourth test was a blind run on a 200-page text.The figures so measured varied from 55% to 85% depending on the text, and on whet~er the test was a blind or follow-up run. A fifth test --a follow-up run on the text used in the fourth test --has already been performed, but the qualitative results are not available at this writing.The results of this run and two more blind runs on ~wo very different texts totalling 160 pages should be available when the conference convenes; these qualitative results are all to be measured by professional technical translators employed by the project sponsor.Any positive conclusions we might draw based on such data will be subject to certain objections.It has been argued that, unless an MT system constitutes an almost perfect translator, it will be useless in any practical setting [Kay, 1980] . As we interpret it, the argument proceeds something like this: (I) there are classical problems in Cemputational Linguistics that remain unsolved to this day (e.g., anaphora, quantifiers, conjunctions);(2) these problems will, in any practical setting, compound on one another so as to result in a very low probability that any given sentence will be correctly translated;(3) it is not in principle possible for a system suffering from malady (1) above to reliably identify and mark its probable errors;(4) if the human post-editor has to check every sentence to determine if it has been correctly translated, then the translation is useless.We accept claims (i) and (3) without question. We consider claim (2) to be a matter for empirical validation --surely not a very controversial contention. As it happens, the substantial body of empirical evidence gathered by the LRC to date argues against this claim. By the time the conference convenes, we will have more definitive data to present, derived by the project sponsor.Regarding (4), we embrace the asaumption that a human post-editor will have to check the entire translation, sentence-by-sentence; but we argue that Kay"s conclusion ("then the translation is useless") is again properly a matter for empirical validation.Meanwhile, we are operating under the assumption that this conclusion is patently false --after all, where translation is taken seriously, human translations are routinely edited via exhaustive review, but no one claims that they are uselessl E. Overall Performance In this section we advance a meaningful. 
E. Overall Performance

In this section we advance a meaningful, more-or-less objective metric by which any MT system can and should be judged: overall (man/machine) translation performance. The idea is simple. The MT system must achieve two simultaneous goals: first, the system's output must be acceptable to the translator/editor for the purpose of revision; second, the cost of the total effort (including amortization and maintenance of the hardware and software) must be less than the current alternative for like material --human translation followed by post-editing. There may be a significant problem with the reliability of human revisors' judgements (which are nevertheless the best available): the writer has been told by professional technical editors/translators (potential users of the LRC MT system) that they look forward to editing our machine translations "because the machine doesn't care" [private communication]. (That is, they would change more in a machine translation than in a supposedly equivalent human translation because they would not have to worry about insulting the original translator with what s/he might consider "petty" changes.) Thus, the "correctness" standards to be applied to MT will very likely differ from those applied to human translation, simply due to the translation source. Since the errors committed by an MT system seldom resemble errors made by human translators, the possibility of a "Turing test" for an MT system does not exist at the current time. When the conference convenes, we will present such data as we have, bearing on the issue of overall performance using our system. Preliminary data from at least one outside assessment should be available. This information will tend to indicate the readiness of our system for use in a production translation environment.

We have commented on the relative merits in large-scale application of several linguistic techniques: (a) a phrase-structure grammar; (b) syntactic features; (c) semantic features; (d) scored interpretations; (e) transformations indexed to specific rules; (f) a transfer component; and (g) attached procedures to effect translation. We also have presented our findings concerning the practical merits of several computational techniques: (a) a bottom-up, all-paths parser; (b) associated rule-body procedures; (c) spelling correction; (d) chart searching in case of analysis failures; and (e) recursive parsing of parenthetical expressions. We believe these findings constitute useful information about the state of the art in Computational Linguistics. We will not have any firm empirical evidence concerning overall performance until later in 1983, when the LRC MT system will have been used in-house by our sponsor for very-large-scale translation experiments. However, we will have some preliminary data from our sponsor that can be adduced as a basis for extrapolation. (Our sponsor will indeed be using the data for just such a purpose.) This should constitute useful information about the state of the art in Machine Translation at the University of Texas. To the extent that such findings are positive, they will lend credence to our claims regarding the practical utility of the methods we employed.
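The overall-performance criterion can be restated as a simple cost inequality; the symbols below are introduced here only to summarize the verbal criterion and are not notation from the paper.

```latex
% Our notation, introduced only to restate the criterion above: machine
% translation plus its revision (including amortization and maintenance of
% hardware and software) must cost less than human translation plus revision.
\[
  C_{\mathrm{MT}} \;+\; C^{\mathrm{MT}}_{\mathrm{revise}} \;+\; C_{\mathrm{amortize,\,maintain}}
  \;<\;
  C_{\mathrm{human}} \;+\; C^{\mathrm{human}}_{\mathrm{revise}}
\]
```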
Main paper: i introduction: The LRC MT system is one of very few large-scale applications of modern computational linguistics techniques [Lehmann, 1981] . Although the LRC MT system is nearing the status of a production system (a version should be delivered to the project sponsor about the time this conference takes place), it is not at all static; rather, it is an evolving collection of techniques which are continually tested through application to moderately large technical manuals ranging from 50 to 200 pages in length.Thus, our "applied" system remains a research vehicle that serves as an excellent testbed for proposed new procedures.In general, the criteria for our choice of linguistic and computational techniques are three: effectiveness, convenience of use, and efficiency. These criteria are applied in a context where the production of an MT system to be operational in the near-term future is of critical concern. Candidate techniques which do not admit near-term, large-scale application thus suffer an overwhelmins disadvantage. The questions confronting us are, then, twofold: (I) which techniques admit such application; and (2) which of these best satisfy our three general criteria?The first question is usually answered through an evaluation of the likely difficulties and requirements for implementation;the second, through empirical results in the course of experiments.Our evaluation of the LRC ~E system's current status will be based on three points! (a) the system's provision of all the tools necessary for users to effect the complete translation process (including text processing, editing, terminology meinten~ok-up, etc.); (b) quantitae., throughput on a particul.(c) qualitative PerformS'known about overall performliveness (i.e., the number c o be] supported by a single "• , m, cheer [expected] cnrougnp any other personnel necessar-to-day operation of the sySted ] overall costs of translat the norm experienced in human-fine I numbers" will not be a of the conference. but the ninary experiments by our sponthus some reasonable projecti,Our ~een "linguistic ceohniqu%ional techniques" (dlscussqection) is somewhat artificihalidity in a broad sense, aSfrom an overview of the point s section we present the reas of the following linguistia phrase-structure grammar; ures;(c) semantic features;erpretations;(e) transformecific rules; (f) a transfer:Cached procedures to effect ttIn ,e employ a phrase-structur~Y sufficient lexical controls ~ lexical-funccional grammar all our linguistic decision most controversial, and cons the most attention. Generall are two competing claims:rules per se are inadequ~-S., [Cullingford, 1978] );~r forms of gr~ummar (ATNs [~rmational [Petrick, 1973], ; 1972] , word-experts [Small,~rior. We will deal with th.Th~ght that claim that syntax ~propriate models of languas according to this notion, be treated [almost] entirely on the basis of semantics, guided by a strong underlying model of the current situational context, and the expectations that may be derived therefrom. We cannot argue against the claim chat semantics is of critical concern in Natural Language Processing.However, as yet no strong case has been advanced for the abandonment of syntax. 
Moreover, no system has been deleloped by any of the adherents of the "semantics only" school of thought that has more-or-less successfully dealt with ALL of a vide range --or at least large volume --of ~aterial.A more damaging argument against this school is that every NLP system to date that HAS been applied Co large volumes of text (in the attempt to process ALL of it soma significant sense) has been based on a strong syntactic model of language (see, e.g., [Boater et el., 1980b] , [Damerau, 1981] . [Hendrix et el., 1978] , [Lehmann et el,, 1981] , [Martin et el., 1981] , [Robinson. 1982] , and [Sager. 1981] ).There are other schools of thought that hold phrase-structure (PS) rules in disrespect, while admitting the utility (necessity) of syntax. It is claimed that the phrase-structure formalism is inadequate, and that other forms of gr----r are necessary.(This has been a long-standing position in the linguistic community, being upheld there before most computational linguists jumped on the bandwagon; ironically, this position is nov being challenged by some within the linguistic community itself, who are once again supporting PS rules as a model of natural language use [Gazdar, 1981] .)The anti-PS positions in the NLP c~umiCy are all, of necessity, based on practical considerations, since the models advanced to replace PS rules are formally equivalent in generative power (assuming the PS rules to be augmented, which is always the case in modern NLP systems employing them).But cascaded ATNs [Woods, 1980] , for example, are only marginally different from PS rule systems. It is curious to note that only one of the remaining contenders (s transformational gr--,--r [Damerau, 1981]) has been demonstrated in large-scale application --and even this system employs PS rules in the initial stages of parsing. Other formal systems (e.g., procedural gr=mm-rs [Winograd, 1972] ) have been applied to semantically deep (but linguistically Lmpoverished) domains --or Co excessively lJJniced domains (e.g., Smell's [1980] "word expert" parser seems to have encompassed a vocabulary of less than 20 items).For practical application, it is necessary thaca system be able co accumulate grammar rules, and especially lexical items, aca prodigious rate by current NLP standards. The formalisms competing with PS rules and dictionary entries of modest size seem to be universally characterizable as requiring enormous human resources for their implementation in even a moderately large environment.This should not be surprising: it is precisely the claim of these competing methodologies (chose Chac are ocher than slight variations on PS rules) that language is an exceedingly complex phenomenon, requiring 167 correspondingly complex techniques to model. 
For "deep understanding" applications, we do noc contest this claim.But we do maintain chat there are some applications that do not seem to require this level of effort for adequate results in a practical setting.Our particular application -automated translation of technical texts --seems CO fell in ChiJ category.The LRC )iT system is currently equipped with something over 400 PS rules describing the Source Language (German), and nearly 10,000 lexical entries in each of two languages (German and the Target Language --English).The current state of our coverage of the SL is that the system is able to parse and acceptably translate the majority of sentences in previously-unseen texts, within the subject areas bounded by our dictionary (specific figures will be related below).By the time this conference convenes, we will have begun the process of adding to the system an analysis grammar of the current TL (English), so that the direction of translation may be reversed; we anticipate bringing the English grammar up to the level of the German gr2m~ar in about a year's time.Our expectations for eventual coverage a~e that around 1,000 PS rules will he adequate co account for almost all sentence forms actually encountered in technical texts, whatever the language.We do not feel constrained to account for every possible sentence form in such texts -nor for sentence forms not found in such texts (as in the case of poetry) --since the required effort would not be cost-effective whether measured in financial or human terms, even if it were possible using current techniques (which ve doubt).Our use of syntactic features is relatively noncontroversial, given our choice of the PS rule formalism.We employ syntactic features for two purposes.One is the usual practice of using such features to restrict the application of PS rules (e.g., by enforcing subject-verb number agreement).The other use is perhaps peculiar to our type of application:once an analysis is achieved, certain syntactic features are employed to control the course (and outcome) of translation --i.e., generation of the TL sentence. The "augmentations" to our PS rules include procedures written in a formal language (so that our linguists do not have Co learn LISP) that manipulate features by restricting their presence. their values if present, etc., and by moving them from node to node in the "parse tree" during the course of the analysis.As is the case with other researchers employing such techniques, we have found this to be an extremely powerful (and of course necessary) means of restricting the activities of the parser.We employ simple semantic features, as opposed to complex models of the domain. Our reasons are primarily practical.First, they seem sufficient for at least the initial stage of our application.Second, the thought of writing complex models of even one complete technical domain is staggering: the operation and maintenance manuals we ar e currently working with (describing a digital telephone switching system) are part of a document collection that is expected to comprise some 100,000 pages of text when complete.A research group the size of ours would not even be able to read that volume of material, much less write the "necessary" semantic models subsumed by it, in any reasonable amount of time. 
It is a well-known fact that NLP systems tend to produce many readings of their input sentences (unless, of course, constrained to produce the first reading only --which can result in the "right" interpretation being overlooked). The LRC MT system produces all interpretations of the input "sentence" and assigns each of them a score, or plausibility factor [Robinson, 1982]. This technique can be used, in theory, to select a "best" interpretation from the possible readings of an ambiguous sentence. We base our scores on both lexical and grammatical phenomena --plus the types of any spelling/typographical errors, which can sometimes be "corrected" in more than one way. Our experiences relating to the reliability and stability of heuristics based on this technique are decidedly positive: we employ only the (or a) highest-scoring reading for translation (the others being discarded), and our informal experiments indicate that it is very rarely true that a better translation results from a lower-scoring analysis. (Surprisingly often, a number of the higher-scoring interpretations will be translated identically. But poorer translations are frequently seen from the lower-scoring interpretations, demonstrating that the technique is indeed effective.)

It is frequently argued that translation should be a process of analyzing the Source Language (SL) into a "deep representation" of some sort, then directly synthesizing the Target Language (TL) (e.g., [Carbonell, 1978]). We and others [King, 1981] contest this claim --especially with regard to "similar languages" (e.g., those in the Indo-European family). One objection is based on large-scale, long-term trials of the "deep representation" (in MT, called the "pivot language") technique by the MT group at Grenoble [Boitet, 1980a]. After an enormous investment in time and energy, including experiments with massive amounts of text, it was decided that the development of a suitable pivot language (for use in Russian-French translation) was probably impossible. Another objection is based on practical considerations: since it is not likely that any NLP system will in the foreseeable future become capable of handling unrestricted input --even in the technical area(s) for which it might be designed --it is clear that a "fail-soft" technique is necessary. It is not obvious that such is possible in a system based solely on a pivot language; a hybrid system capable of dealing with shallower levels of understanding is necessary in a practical setting. This being the case, it seems better in near-term applications to start off with a system employing a "shallow" but usable level of analysis, and to deepen the level of analysis as experience dictates and resources permit. Our alternative is to have a "transfer" component which maps "shallow analyses of sentences" in the SL into "shallow analyses of equivalent sentences" in the TL, from which synthesis then takes place. While we and the rest of the NLP community continue to debate the nature of an adequate pivot language (i.e., the nature of deep semantic models and the processing they entail), we can hopefully proceed to construct a usable system capable of progressive enhancement as linguistic theory becomes able to support deeper models.

We employ a transformational component, during both the analysis phase and the translation phase. The transformations, however, are indexed to specific syntax rules rather than loosely keyed to syntactic constructs. (Actually, both styles are available, but our linguists have never seen the need or practicality of employing the open-ended variety.) It is clearly more efficient to index transformations to specific rules when possible; the import of our findings is that it seems to be unnecessary to have open-ended transformations --even during analysis, when one might intuitively expect them to be useful.

Our transfer procedures (which effect the actual translation of SL into TL) are tightly bound to nodes in the analysis (parse tree) structure [Paxton, 1977]. They are, in effect, suspended procedures --the same procedures that constructed the corresponding parse tree nodes to begin with. This is to be preferred over a more general, loose association based on syntactic constructs because, aside from its advantage in sheer computational efficiency, it eliminates the possibility that the "wrong" procedure can be applied to a construct. The only real argument against this technique, as we see it, is based on space considerations: to the extent that different constructs share the same transfer operations, replication of the procedures that implement said operations (and editing effort to modify them) is possible. We have not noticed this to be a problem. For a while, our system load-up procedure searched for duplicates of this nature and eliminated them; however, the gains turned out to be minimal --different constructs typically do require different operations.
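To make the two preceding ideas concrete, here is a minimal Python sketch, not the LRC code: each complete analysis carries a plausibility score, only the highest-scoring reading is kept, and each node of the winning parse tree retains a reference to the transfer procedure of the rule that built it. The node layout, scores, and toy procedures are all invented for the example.

# Illustrative sketch: scored readings plus transfer procedures bound to nodes.
def transfer_np(node):                  # "suspended" procedure for a toy NP rule
    return " ".join(transfer(child) for child in node["children"])

def transfer_leaf(node):                # leaves carry a target-language gloss
    return node["gloss"]

def transfer(node):
    return node["proc"](node)           # each node knows its own procedure

def best_reading(analyses):
    # Keep only the (or a) highest-scoring analysis; the rest are discarded.
    return max(analyses, key=lambda a: a["score"])

leaf_das  = {"proc": transfer_leaf, "gloss": "the"}
leaf_haus = {"proc": transfer_leaf, "gloss": "house"}
np_node   = {"proc": transfer_np, "children": [leaf_das, leaf_haus]}

analyses = [{"score": 12.5, "root": np_node},
            {"score": 7.0,  "root": np_node}]     # a lower-scoring reading
print(transfer(best_reading(analyses)["root"]))    # -> "the house"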
III. COMPUTATIONAL TECHNIQUES EMPLOYED

Again, our separation of "linguistic" from "computational" techniques is somewhat artificial, but nevertheless useful. In this section we present the reasons for our use of the following computational techniques: a bottom-up, all-paths parser; associated rule-body procedures; spelling correction; chart searching in case of analysis failure; and recursive parsing of parenthetical expressions. The parser, in particular, received our greatest experimental scrutiny. We have collected a substantial body of empirical evidence relating to parsing techniques. Since the evidence and conclusions require lengthy discussion, and are presented elsewhere [Slocum, 1981], we will only briefly summarize the results here.

The evidence indicates that our use of an all-paths bottom-up parser is justified, given the current state of the art in Computational Linguistics. Our reasons are the following: first, the dreaded "exponential explosion" of processing time has not appeared (and our grammar and test texts are among the largest in the world); instead, processing time appears to be linear with sentence length --even though our system produces all possible interpretations. Second, top-down parsing methods suffer inherent disadvantages in efficiency, and bottom-up parsers can be and have been augmented with "top-down filtering" to restrict the syntax rules applied to those that an all-paths top-down parser would apply. Third, it is difficult to persuade a top-down parser to continue the analysis effort to the end of the sentence when it blocks somewhere in the middle --which makes the implementation of "fail-soft" techniques that much more difficult. Lastly, the lack of any strong notion of how to construct a "best-path" parser, coupled with the raw speed of well-implemented parsers, implies that an all-paths parser which scores interpretations and can continue the analysis to the end of the sentence is best in a practical application such as ours.

We associate a procedure directly with each individual syntax rule, and evaluate it as soon as the parser determines the rule to be (seemingly) applicable [Pratt, 1973; Hendrix, 1978]. These rule-body procedures are written in the formal language mentioned earlier, whose operators test and manipulate the features/values of nodes in the tree --i.e., no knowledge of LISP is necessary to code effective procedures. Since these procedures are compiled into LISP, all the power of LISP is available as necessary. The chief linguist on our project, who has a vague knowledge of LISP, has employed OR and AND operators to a significant extent (we didn't bother to include them in the specifications of the formal language, though we obviously could have), and on rare occasions has resorted to using COND. No other calls to true LISP functions (as opposed to our formal operators, which are few and typically quite primitive) have seemed necessary, nor has this capability been requested, to date. The power of our rule-body procedures seems to lie in the choice of features/values that decorate the nodes, rather than the processing capabilities of the procedures themselves.
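The toy Python recognizer below is a sketch under stated assumptions, not the LRC parser: it shows an all-paths, bottom-up scheme over binary rules in which the procedure attached to a rule is evaluated as soon as the rule is found to be (seemingly) applicable and may veto the application. The grammar, lexicon, and feature names are invented.

# Illustrative sketch: all-paths bottom-up recognition with rule-body procedures.
LEXICON = {"dogs": ("NP", {"NUM": "PLUR"}), "dog":   ("NP", {"NUM": "SING"}),
           "bark": ("VP", {"NUM": "PLUR"}), "barks": ("VP", {"NUM": "SING"})}

def s_body(np, vp):                 # rule body for S -> NP VP: enforce agreement
    if np["f"].get("NUM") != vp["f"].get("NUM"):
        return None                 # veto the rule application
    return {"NUM": np["f"].get("NUM")}

RULES = [("S", "NP", "VP", s_body)]

def parse(words):
    chart = {}                                       # (i, j) -> list of edges
    for i, w in enumerate(words):
        cat, feats = LEXICON[w]
        chart.setdefault((i, i + 1), []).append({"cat": cat, "f": feats, "kids": [w]})
    n = len(words)
    for width in range(2, n + 1):                    # all paths are kept
        for i in range(0, n - width + 1):
            j = i + width
            for k in range(i + 1, j):
                for left in chart.get((i, k), []):
                    for right in chart.get((k, j), []):
                        for parent, lc, rc, body in RULES:
                            if left["cat"] == lc and right["cat"] == rc:
                                feats = body(left, right)   # evaluated immediately
                                if feats is not None:
                                    chart.setdefault((i, j), []).append(
                                        {"cat": parent, "f": feats, "kids": [left, right]})
    return chart.get((0, n), [])

print(len(parse(["dogs", "bark"])))    # 1 analysis
print(len(parse(["dogs", "barks"])))   # 0: rejected by the rule body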
There are limitations and dangers to spelling correction in general, but we have found it to be an indispensable component of an applied system. People do make spelling and typographical errors, as is well known; even in "polished" documents they appear with surprising frequency (about every other page, in our experience). Arguments by LISP programmers (re: INTERLISP's DWIM) aside, users of applied NLP systems distinctly dislike being confronted with requests for clarification --or, worse, unnecessary failure --in lieu of automated spelling correction. Spelling correction, therefore, is necessary. Luckily, almost all such errors are treatable with simple techniques: single-letter additions, omissions, and mistakes, plus two- or three-letter transpositions, account for almost all mistakes. Unfortunately, it is not infrequently the case that there is more than one way to "correct" a mistake (i.e., resulting in different corrected versions). Even a human cannot always determine the correct form in isolation, and for NLP systems it is even more difficult. There is yet another problem with automatic spelling correction: how much to correct. Given unlimited rein, any word can be "corrected" into any other. Clearly there must be limits, but what are they? Our informal findings concerning how much one may safely "correct" in an application such as ours are these: the few errors that simple techniques have not handled are almost always bizarre (e.g., repeated syllables or larger portions of words) or highly unusual (e.g., blanks inserted within words); correction of more than one error in a word is dangerous (it is better to treat the word as unknown, hence a noun); and "correction" of errors which have converted one word into another (valid in isolation) should not be attempted.

In the event of failure to achieve a comprehensive analysis of the sentence, a system such as ours --which is to be applied to hundreds of thousands of pages of text --cannot indulge in the luxury of simply replying with an error message stating that the sentence cannot be interpreted. Such behavior is a significant problem, one which the NLP community has failed to come to grips with in any coherent fashion. There have, at least, been some forays. Weischedel and Black [1980] discuss techniques for interacting with the linguist/developer to identify insufficiencies in the grammar. This is fine for development purposes. But, of course, in an applied system the user will be neither the developer nor a linguist, so this approach has no value in the field. Hayes and Mouradian [1981] discuss ways of allowing the parser to cope with ungrammatical utterances; this work is in its infancy, but it is stimulating nonetheless. We look forward to experimenting with similar techniques in our system. What we require now, however, is a means of dealing with "ungrammatical" input (whether through the human's error or the shortcomings of our own rules) that is highly efficient, sufficiently general to account for a large, unknown range of such errors on its first outing, and which can be implemented in a short period of time. We found just such a technique three years ago: a special procedure (invoked when the analysis effort has been carried through to the end of the sentence) searches through the parser's chart to find the shortest path from one end to the other; this path represents the fewest, longest-spanning phrases which were constructed during the analysis. Ties are broken by use of the standard scoring mechanism that provides each phrase in the analysis with a score, or plausibility measure (discussed earlier). We call this procedure "phrasal analysis". To our knowledge, no other NLP system relies on such a general technique for searching the parser's chart when an analysis effort has failed. We think that phrasal analysis --which is simple and independent of both language and grammar --could be useful in other applications of NLP technology, such as natural language interfaces to databases.
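A minimal sketch of the idea behind phrasal analysis follows; it is not the LRC implementation, and the chart edges, labels, and scores are invented. It finds the path across the chart using the fewest (hence longest-spanning) phrases, breaking ties with the phrase scores.

# Illustrative sketch: shortest path across a parser's chart, ties broken by score.
def phrasal_analysis(edges, n):
    """edges: list of (start, end, score, label); returns the chosen label sequence."""
    best = {0: (0, 0.0, [])}                 # position -> (phrase count, -total score, path)
    for pos in range(1, n + 1):
        candidates = []
        for (i, j, score, label) in edges:
            if j == pos and i in best:
                count, neg_score, path = best[i]
                candidates.append((count + 1, neg_score - score, path + [label]))
        if candidates:
            best[pos] = min(candidates)      # fewest phrases first, then highest score
    return best.get(n, (None, None, []))[2]

edges = [(0, 3, 4.0, "NP"), (3, 7, 5.5, "VP"),       # two long-spanning phrases
         (0, 1, 1.0, "DET"), (1, 3, 2.0, "N"),        # shorter competitors
         (3, 4, 1.0, "V"), (4, 7, 2.5, "NP2")]
print(phrasal_analysis(edges, 7))   # -> ['NP', 'VP']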
Few NLP systems have dealt with parenthetical expressions, but MT researchers know well that these constructs appear in abundance in technical texts. We deal with this phenomenon in the following way: rather than treating parentheses as lexical items, we make use of LISP's natural treatment of them as list delimiters, and treat the resulting sublists as individual "words" in the sentence; these "words" are "lexically analyzed" via recursive calls to the parser. Aside from the elegance of the treatment, this has the advantage that "ungrammatical" parenthetical expressions may undergo phrasal analysis and thus become single-phrase entities as far as the analysis of the encompassing sentence is concerned; thus, ungrammatical parenthetical expressions need not result in ungrammatical (hence poorly handled) sentences.

No NLP system is likely to be successful in isolation: an environment of support tools is necessary for ultimate acceptance on the part of prospective users. The following support tools, we think, constitute a minimum workable environment for both development and use: a DBMS for handling lexical entries; validation programs that verify the admissibility of all linguistic rules (grammar, lexicons, transformations, etc.) according to a set of formal specifications; dictionary programs that search through large numbers of proposed new lexical entries (words, in all relevant languages) to determine which entries are actually new, and which appear to replicate existing entries; defaulting programs that "code" new lexical entries in the NLP system's chosen formalism automatically, given only the root forms of the words and their categories, using empirically determined best guesses based on the available dictionary database entries plus whatever orthographic information is available in the root forms; and benchmark programs to test the integrity of the NLP system after significant modifications [Slocum, 1982]. A DBMS for handling grammar rules is also a good idea. For production MT, further tools are needed: text-handling software able to cope with the formats encountered in real documents --paragraphs, multi-column tables, flowcharts, figure labels, and the like; a powerful on-line editing program with special capabilities (such as single-keystroke commands to look up words in on-line dictionaries) in addition to the normal editing commands (almost all of which should be invokable with a single keystroke); and also, perhaps, access to a "term databank," i.e., an on-line database of technical terms used in the subject area(s) to be covered by the MT system. The LRC MT system already provides all of the tools mentioned above, with the exception of the text editor and terminology database (both of which our sponsor will provide). All of this comes in a single integrated working environment, so that our linguists and lexicographers can implement changes and test them immediately for their effects on translation quality, and modify or delete their additions with ease, if desired.

The average performance of the LRC MT system when translating technical manuals from German into English, running in compiled INTERLISP on a DEC 2060 with over a million words of physical memory, has been measured at slightly under 2 seconds of CPU time per input word; this includes storage management (the garbage collector alone consumes 45% of all CPU time on this limited-address-space machine), paging, swapping, and I/O --that is, all forms of overhead. Our experience on the 2060 involved the translation of some 330 pages of text, in three segments, over a two-year period. On our Symbolics LM-2 Lisp Machine, with 256K words of physical memory, preliminary measurements indicate an average performance of 6-10 seconds (real time) per input word, likewise including all forms of overhead. Our LM-2 experience to date has involved the translation of about 200 pages of text in a single run. The paging rate indicates that, with added memory (512K words is "standard" on these machines), we could expect a significant reduction in this performance figure.
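To put the DEC 2060 figure above in rough perspective, the following back-of-the-envelope conversion may help; the words-per-page density is an assumed value for illustration only, not a number taken from our measurements.

# Rough arithmetic only: converts roughly 2 CPU seconds per word (DEC 2060,
# all overhead included) into a per-page cost.  WORDS_PER_PAGE is an assumption.
SECONDS_PER_WORD = 2.0     # measured CPU time per input word, including overhead
WORDS_PER_PAGE = 250       # assumed average density of a technical-manual page
minutes_per_page = SECONDS_PER_WORD * WORDS_PER_PAGE / 60.0
print(f"about {minutes_per_page:.1f} CPU minutes per page")   # about 8.3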
With a faster, second-generation Lisp Machine, we would expect a more substantial reduction of real-time processing requirements. We hope to have had the opportunity to conduct an experiment on at least one such machine by the time this conference convenes.

Measuring MT system throughput is one thing. Measuring "machine translation quality" is quite another, since the standards for measurement (and for interpreting the measurements) are little understood, and vary widely. Thus, "quality" measurements are of little validity. However, because there is usually a considerable amount of lay interest in such numbers, we shall endeavor to indicate why they are basically meaningless, and then report our findings for the benefit of those who feel a need to know. Certainly it is the case that "correctness" numbers can theoretically give some indication of the quality of translation. If an MT system were said to translate, say, 10% of its input correctly, no one would be likely to consider it usable. The trouble is, quoted figures almost universally hover at the opposite extreme of the spectrum --around 90% --for MT systems that vary remarkably w.r.t. the subjective quality of their output. (Since, to the lay person, "90% correct" seems to constitute minimal acceptable quality, the consistent use of the 90% figure should not be surprising.) The trouble arises from at least the following human variables: who performs the measurement? what, exactly, is measured? and by what standards? Since almost all measurements are performed by the vendor of the system in question, there is obvious room for bias. Second, if one measures "words translated correctly," whatever that means, that is a very different thing from measuring, e.g., "sentences translated correctly," whatever that means. Finally, there is the matter of defining the operative word, "correct". Since no two translators are likely to agree on what constitutes a "correct" translation --to say nothing of establishing a rigorous, objective standard --the notion of "correctness" will naturally vary depending on who determines it. It will also vary depending on the amount of time available to perform the measurement: it is widely recognized that an editor will change more in a given translation, the more time he has to work on it. Finally, "correctness" will vary depending on the use to which the translation is intended to be put, the classical first division being information acquisition vs. dissemination. Any meaningful test, we would argue, should be conducted by the prospective user and should expose the system to such varieties of text as it is intended to handle (in the near term, at least); the texts should be chosen by the user, and not divulged to the vendor beforehand except perhaps in the form of a list of words or technical terms (in root form) which appear therein --and that, for not too long a period of time before the test.

With the reader bearing all of the above in mind, we report the following quality measurements. During the last two years, LRC personnel have measured the quality of translations produced by the LRC MT system in terms of the percentage of sentences (actually,
"translation units', since isolated words and phrases appear frequently) which were translated from German into acceptable English; if any change to the translated unit was necessary, however slight, the translation was considered incorrect; the test runs were made once or twice for each text --once, before the text was ever seen by the LRC staff (a "blind" run), and once more, after a few months of system enhancement based in part on the previous results (a "follow-up" run); the project sponsor always provided the LRC with a list of the words and technical terms said to be employed in the text (the list was sometimes incomplete, as one would expect of human compilations of the vocabulary in a large document) .The first run, on a 50-page text, was performed only after the text had been studied for some time; the second and third runs, on an 80-page text, were performed both ways ('blind" and "follow-up');the fourth test was a blind run on a 200-page text.The figures so measured varied from 55% to 85% depending on the text, and on whet~er the test was a blind or follow-up run. A fifth test --a follow-up run on the text used in the fourth test --has already been performed, but the qualitative results are not available at this writing.The results of this run and two more blind runs on ~wo very different texts totalling 160 pages should be available when the conference convenes; these qualitative results are all to be measured by professional technical translators employed by the project sponsor.Any positive conclusions we might draw based on such data will be subject to certain objections.It has been argued that, unless an MT system constitutes an almost perfect translator, it will be useless in any practical setting [Kay, 1980] . As we interpret it, the argument proceeds something like this: (I) there are classical problems in Cemputational Linguistics that remain unsolved to this day (e.g., anaphora, quantifiers, conjunctions);(2) these problems will, in any practical setting, compound on one another so as to result in a very low probability that any given sentence will be correctly translated;(3) it is not in principle possible for a system suffering from malady (1) above to reliably identify and mark its probable errors;(4) if the human post-editor has to check every sentence to determine if it has been correctly translated, then the translation is useless.We accept claims (i) and (3) without question. We consider claim (2) to be a matter for empirical validation --surely not a very controversial contention. As it happens, the substantial body of empirical evidence gathered by the LRC to date argues against this claim. By the time the conference convenes, we will have more definitive data to present, derived by the project sponsor.Regarding (4), we embrace the asaumption that a human post-editor will have to check the entire translation, sentence-by-sentence; but we argue that Kay"s conclusion ("then the translation is useless") is again properly a matter for empirical validation.Meanwhile, we are operating under the assumption that this conclusion is patently false --after all, where translation is taken seriously, human translations are routinely edited via exhaustive review, but no one claims that they are uselessl E. Overall Performance In this section we advance a meaningful. 
more-or-less objective metric by which any MT system can and should be judged: overall (man/machine) translation performance. The idea is simple. The MT system must achieve two simultaneous goals: first, the system's output must be acceptable to the translator/editor for the purpose of revision; second, the cost of the total effort (including amortization and maintenance of the hardware and software) must be less than the current alternative for like material --human translation followed by post-editing. There may be a significant problem with the reliability of human revisors' judgements (which are nevertheless the best available): the writer has been told by professional technical editors/translators (potential users of the LRC MT system) that they look forward to editing our machine translations "because the machine doesn't care" [private communication]. (That is, they would change more in a machine translation than in a supposedly equivalent human translation, because they would not have to worry about insulting the original translator with what s/he might consider "petty" changes.) Thus, the "correctness" standards to be applied to MT will very likely differ from those applied to human translation, simply due to the translation source. Since the errors committed by an MT system seldom resemble errors made by human translators, the possibility of a "Turing test" for an MT system does not exist at the current time. When the conference convenes, we will present such data as we have bearing on the issue of overall performance using our system. Preliminary data from at least one outside assessment should be available. This information will tend to indicate the readiness of our system for use in a production translation environment.

We have commented on the relative merits, in large-scale application, of several linguistic techniques: (a) a phrase-structure grammar; (b) syntactic features; (c) semantic features; (d) scored interpretations; (e) transformations indexed to specific rules; (f) a transfer component; and (g) attached procedures to effect translation. We have also presented our findings concerning the practical merits of several computational techniques: (a) a bottom-up, all-paths parser; (b) associated rule-body procedures; (c) spelling correction; (d) chart searching in case of analysis failure; and (e) recursive parsing of parenthetical expressions. We believe these findings constitute useful information about the state of the art in Computational Linguistics.

We will not have any firm empirical evidence concerning overall performance until later in 1983, when the LRC MT system will have been used in-house by our sponsor for very-large-scale translation experiments. However, we will have some preliminary data from our sponsor that can be adduced as a basis for extrapolation. (Our sponsor will indeed be using the data for just such a purpose.) This should constitute useful information about the state of the art in Machine Translation at the University of Texas. To the extent that such findings are positive, they will lend credence to our claims regarding the practical utility of the methods we employed.

Appendix:
null
null
null
null
{ "paperhash": [ "robinson|diagram:_a_grammar_for_dialogues", "king|design_characteristics_of_a_machine_translation_system", "slocum|a_practical_comparison_of_parsing_strategies", "boitet|russian-french_at_geta:_outline_of_the_method_and_detailed_example", "boitet|present_and_future_paradigms_in_the_automatized_translation_of_natural_languages.", "weischedel|if_the_parser_fails", "hayes|flexible_parsing", "carbonell|knowledge-based_machine_translation.", "hendrix|developing_a_natural_language_interface_to_complex_data", "woods|syntax,_semantics,_and_speech", "pratt|a_linguistics_oriented_programming_language", "damerau|operating_statistics_for_the_transformational_question_answering_system", "woods|cascaded_atn_grammars", "cullingford|script_application:_computer_understanding_of_newspaper_stories." ], "title": [ "DIAGRAM: a grammar for dialogues", "Design Characteristics of a Machine Translation System", "A Practical Comparison of Parsing Strategies", "Russian-French at GETA: Outline of the Method and Detailed Example", "Present and Future Paradigms in the Automatized Translation of Natural Languages.", "If The Parser Fails", "Flexible Parsing", "Knowledge-Based Machine Translation.", "Developing a natural language interface to complex data", "Syntax, Semantics, and Speech", "A Linguistics Oriented Programming Language", "Operating Statistics for the Transformational Question Answering System", "Cascaded ATN Grammars", "Script application: computer understanding of newspaper stories." ], "abstract": [ "An explanatory overview is given of DIAGRAM, a large and complex grammar used in an artificial intelligence system for interpreting English dialogue. DIAGRAM is an augmented phrase-structure grammar with rule procedures that allow phrases to inherit attributes from their constituents and to acquire attributes from the larger phrases in which they themselves are constituents. These attributes are used to set context-sensitive constraints on the acceptance of an analysis. Constraints can be imposed by conditions on dominance as well as by conditions on constituency. Rule procedures can also assign scores to an analysis to rate it as probable or unlikely. Less likely analyses can be ignored by the procedures that interpret the utterance. For every expression it analyzes, DIAGRAM provides an annotated description of the structure. The annotations supply important information for other parts of the system that interpret the expression in the context of a dialogue.\nMajor design decisions are explained and illustrated. Some contrasts with transformational grammars are pointed out and problems that motivate a plan to use metarules in the future are discussed. (Metarules derive new rules from a set of base rules to achieve the kind of generality previously captured by transformational grammars but without having to perform transformations on syntactic analyses.)", "This paper distinguishes a set of criteria to be met by a machine translation system (EUROTRA) currently being planned under the sponsorship of the Commission of the European Communities and attempts to show the effect of meeting those criteria on the overall system design.", "INTRODUCTION Although the l i terature dealing with formal and natural languages abounds with theoretical arguments of worstcase performance by various parsing strategies [e.g. 
, Grif f i ths & Petrick, 1965; Aho & Ullman, 1972; Graham, Harrison & Ruzzo, Ig80], there is l i t t l e discussion of comparative performance based on actual practice in understanding natural language. Yet important practical considerations do arise when writ ing programs to understand one aspect or another of natural language utterances. Where, for example, a theorist wi l l characterize a parsing strategy according to i ts space and/or time requirements in attempting to analyze the worst possible input acc3rding to ~n arbi t rary grammar s t r i c t l y l imited in expressive power, the researcher studying Natural Language Processing can be jus t i f ied in concerning himself more with issues of practical performance in parsing sentences encountered in language as humans Actually use i t using a grammar expressed in a form corve~ie: to the human l inguist who is writ ing i t . Moreover, ~ r y occasional poor performance may be quite acceptabl:, part icular ly i f real-time considerations are not invo~ed, e.g., i f a human querant is not waiting for the answer to his question), provided the overall average performance is superior. One example of such a situation is o f f l ine Machine Translation.", "This paper is an attempt to present the computer models and linguistic strategies used in the current version of the Russian-French translation system developed at GETA, within the framework of several other applications which are developed in a parallel way, using the same computer system. This computer system, called ARIANE-78, offers to linguists not trained in programming an interactive environment, together with specialized metalanguages in which they write linguistic data and procedures (essentially, dictionaries and grammars) used to build translation systems. In ARIANE-78, translation of a text occurs in six steps : morphological analysis, multilevel analysis, lexical transfer, structural transfer, syntactic generation, morphological generation. To each such step corresponds a computer model (nondeterministic finite-state string to tree transducer, tree to tree transducer,...), a metalanguage, a compiler and execution programs. The units of translation are not sentences, but rather one or several paragraphs, so that the context usable, for instance to resolve anaphores, is larger than in other secondgeneration systems.", "Useful automatized translation must be considered in a problem-solving setting, composed of a linguistic environment and a computer environment. We examine the facets of the problem which we believe to be essential, and try to give some paradigms along each of them. Those facets are the linguistic strategy, the programming tools, the treatment of semantics, the computer environment and the types of implementation.", "The unforgiving nature of natural language components when someone uses an unexpected input has recently been a concern of several projects. For instance, Carbonell (1979) discusses inferring the meaning of new words. Hendrix, e t .a l . (1978) describe a system that provides a means for naive users to define personalized paraphrases and that l i s ts the items expected next at a point where the parser blocks. Weischedel, e t .a l . (1978) show how to relax both syntactic and semantic constraints such that some classes of ungrammatical or semantically inappropriate input are understood. Kwasny aod Sondheimer (1979) present techniques for understanding several classes of syntactically il l-formed input. Codd, e t .a l . 
(1978) and Lebowitz (1979) present alternatives to top-down, le f t to r igh t parsers as a means of dealing with some of these problems.", "When people use natural language in natural settings, they often use it ungrammatically, missing out or repeating words, breaking-off and restarting, speaking in fragments, etc., Their human listeners are usually able to cope with these deviations with little difficulty. If a computer system wishes to accept natural language input from its users on a routine basis, it must display a similar indifference. In this paper, we outline a set of parsing flexibilities that such a system should provide. We go on to describe FlexP. a bottom-up pattern-matching parser that we have designed and implemented to provide these flexibilities for restricted natural language input to a limited-domain computer system.", "Abstract : This paper discusses knowledge-based machine translation research at Yale University Artificial Intelligence Laboratory. Our paradigm, illustrated by several working computer programs, is to analyze the source text into a language-free representation, apply world knowledge to infer information implicit in the input text, and generate the translation in various target languages. (Author)", "Aspects of an intelligent interface that provides natural language access to a large body of data distributed over a computer network are described. The overall system architecture is presented, showing how a user is buffered from the actual database management systems (DBMSs) by three layers of insulating components. These layers operate in series to convert natural language queries into calls to DBMSs at remote sites. Attention is then focused on the first of the insulating components, the natural language system. A pragmatic approach to language access that has proved useful for building interfaces to databases is described and illustrated by examples. Special language features that increase system usability, such as spelling correction, processing of incomplete inputs, and run-time system personalization, are also discussed. The language system is contrasted with other work in applied natural language processing, and the system's limitations are analyzed.", "Abstract : The paper attempts to provide an introduction to the techniques and results which have come out of work in computational linguistics which have special relevance to the design of speech understanding systems. The author attempts to trace the development of several important ideas and trends in parsing and syntax and in semantic interpretation.", "A programming language fo r natural language processing programs is descr ibed. Examples of the output of programs w r i t t e n using it are g iven. The reasons fo r various design decisions are discussed. An actual session wi th the system is presented, in which a small fragment of an Engl ish-to-French t r ans la to r is devel oped. Some of the l i m i t a t i o n s of the system are d i s cussed, along wi th plans fo r fu r the r development.", "This paper presents a statistical summary of the use of the Transformational Question Answering (TQA) system by the City of White Plains Planning Department during the year 1978. A complete record of the 788 questions submitted to the system that year is included, as are separate listings of some of the problem inputs. Tables summarizing the performance of the system are also included and discussed. 
In general, performance of the system was sufficiently good that we believe that the approach being followed is a viable one, and are continuing to develop and extend the system.", "A generalization of the notion of ATN grammar, called a cascaded ATN (CATN), is presented. CATN's permit a decomposition of complex language understanding behavior into a sequence of cooperating ATN's with separate domains of responsibility, where each stage (called an ATN transducer) takes its input from the output of the previous stage. The paper includes an extensive discussion of the principle of factoring -- conceptual factoring reduces the number of places that a given fact needs to be represented in a grammar, and hypothesis factoring reduces the number of distinct hypotheses that have to be considered during parsing.", "Abstract : The report describes a computer story understander which applies knowledge of the world to comprehend what it reads. The system, called SAM, reads newspaper articles from a variety of domains, then demonstrates its understanding by summarizing or paraphrasing the text, or answering questions about it. (Author)" ], "authors": [ { "name": [ "Jane J. Robinson" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "M. King" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Jonathan Slocum" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "C. Boitet", "Nicolas Nedobejkine" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "C. Boitet", "Philippe Chatelin", "P. D. Fraga" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "R. Weischedel", "J. Black" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "P. Hayes", "G. Mouradian" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "J. Carbonell", "Richard E Cullinford", "A. Gershman" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "G. Hendrix", "E. Sacerdoti", "Daniel Sagalowicz", "Jonathan Slocum" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "W. Woods" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "V. Pratt" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "F. J. Damerau" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "W. Woods" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "R. E. 
Cullingford" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null, null, null, null, null, null, null, null, null, null, null, null ], "s2_corpus_id": [ "17788520", "16993377", "12179283", "5782458", "1807458", "41770670", "11007680", "60052028", "15391397", "62532956", "469505", "22605062", "6169596", "60708295" ], "intents": [ [ "background" ], [], [ "background" ], [], [], [], [], [], [], [], [ "methodology" ], [], [ "result" ], [ "background" ] ], "isInfluential": [ false, false, false, false, false, false, false, false, false, false, false, false, false, false ] }
Problem: The paper aims to discuss the linguistic and computational techniques utilized in the current version of the Machine Translation system developed at the Linguistics Research Center of the University of Texas, under contract to Siemens AG in Munich, West Germany. Solution: The hypothesis of the paper is that the chosen linguistic and computational techniques, based on criteria of effectiveness, convenience of use, and efficiency, will lead to the successful development of a Machine Translation system ready for application in a production environment.
504
0.017857
null
null
null
null
null
null
null
null
52e1326b87fc9d379211f24371d6f3035f741751
17959703
null
Investigating the Possibility of a Microprocessor-Based Machine Translation System
This paper describes an on-going research project being carried out by staff and students at the Centre for Computational
{ "name": [ "Somers, Harold L." ], "affiliation": [ null ] }
null
null
First Conference on Applied Natural Language Processing
1983-02-01
34
1
null
, it has of course for a long time been standard practice in other areas of knowledge-based programming (Newell, 1973; Davis & King, 1977). The third principle now current in MT and to be incorporated in Bede is that the translation process should be modular. This approach was a feature of the earliest 'second generation' systems (cf. Vauquois, 1975:33), and is characterised by the general notion that any complicated computational task is best tackled by dividing it up into smaller, more or less independent sub-tasks which communicate only by means of a strictly defined interface protocol (Aho et al, 1974). This is typically achieved in the MT environment by a gross division of the translation process into analysis of source language and synthesis of target language, possibly with an intermediate transfer stage (see I.D below), with these phases in turn sub-divided, for example into morphological, lexical and syntactico-semantic modules. This modularity may be reflected both in the linguistic organisation of the translation process and in the provision of software devices specifically tailored to the relevant sub-task (Vauquois, 1975:33). This is the case in Bede, where for each sub-task a grammar interpreter is provided which has the property of being no more powerful than necessary for the task in question. This contrasts with the approach taken in TAUM-METEO (TAUM, 1973), where a single general-purpose device (Colmerauer's (1970) Q-systems) is used throughout; Bede instead employs individual formalisms and processors, and these are described in detail in the second half of this paper.

B. The microprocessor environment

It is in the microprocessor basis that the principal interest in this system lies, and, as mentioned above, the main concern is the effect of the restrictions that the environment imposes. Development of the Bede prototype is presently taking place on Z80-based machines which provide 64k bytes of in-core memory and 720k bytes of peripheral store on two 5-1/4" double-sided double-density floppy disks. The intention is that any commercial version of Bede would run on more powerful processors with a larger address space, since we feel that such machines will soon rival the popularity of the less powerful Z80s as the standard desk-top hardware. Programming so far has been in Pascal/M (Sorcim, 1979), a Pascal dialect closely resembling UCSD Pascal, but we are conscious of the fact that both C (Kernighan & Ritchie, 1978) and BCPL (Richards & Whitby-Strevens, 1979) may be more suitable for some of the software elements, and do not rule out completing the prototype in a number of languages. This adds the burden of designing compatible data structures and interfaces, and we are currently investigating the relative merits of these languages. Portability and efficiency seem to be in conflict here.

Microprocessor-based MT contrasts sharply with the mainframe-based activity, where the significance of problems of economy of storage and efficiency of programs has decreased in recent years. The possibility of introducing an element of human interaction with the system (cf.
Kay, 1980; Melby, 1981) is also highlighted in this environment. Contrast systems like SYSTRAN (Toma, 1977) and GETA (Vauquois, 1975, 1979; Boitet & Nedobejkine, 1980), which work on the principle of large-scale processing in batch mode. Our experience so far is that economy and efficiency in data-structure design and in the elaboration of interactions between programs and data and between different modules are of paramount importance. While it is relatively evident that large-scale MT can be simulated in the microprocessor environment, the cost in real time is tremendous: entirely new design and implementation strategies seem to be called for. The ancient skills of the programmer that have become eroded by the generosity afforded by modern mainframe configurations become highly valued in this microprocessor application.

The state of the art of language processing is such that the analysis of a significant range of syntactic patterns has been shown to be possible, and by means of a number of different approaches. Research in this area nowadays is concentrated on the treatment of more problematic constructions (e.g. Marcus, 1980). This observation has led us to believe that a degree of success in a small-scale MT project can be achieved via the notion of restricting the complexity of acceptable input, so that only constructions that are sure to be correctly analysed are permitted; analysis of this restricted input is purely deterministic. While this approach would be quite unsuitable for a large-scale general-purpose MT system, in the present context --where the problem can be minimised --it seems to be a reasonable approach.

Our own model for the Bede interlingua has not yet been finalised. We believe this to be an area for research and experimentation once the system software has been more fully developed. Our current hypothesis is that the Interlingua will take the form of a canonical representation of the text in which valency-boundness and (deep) case will play a significant role. Sentential features such as tense and aspect will be captured by a 'universal' system of values for the languages involved. We feel that research in this area will, when the time comes, be a significant and valuable by-product of the project as a whole.

In this second half of the paper we present a description of the translation process in Bede, as it is currently envisaged. The process is divided broadly into two parts, analysis and synthesis, the interface between the two being provided by the Interlingua. The analysis module uses a Chart-like structure (cf. Kaplan, 1973) and a series of grammars to produce from the source text the Interlingua tree structure which serves as input to synthesis, where it is rearranged into a valid surface structure for the target language. The 'translation unit' (TU) is taken to be the sentence, or equivalent (e.g. section heading, title, figure caption). Full details of the rule formalisms are given in Somers (1981).
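The Python fragment below is a schematic sketch of this strict modularity only, not Bede's actual interpreters: each stage consumes just the data structure handed to it by the previous stage, in the order of the rule packages described in what follows (A-, P-, T-, L- and M-rules). The function bodies are placeholders, and the stage names are the only thing taken from the text.

# Illustrative sketch: the modular analysis/synthesis pipeline as a chain of
# small, single-purpose stages.  All behaviour here is a placeholder.
def a_rules(tu):        return {"chart": f"segmented({tu})"}            # morphological analysis
def p_rules(data):      return {"arcs": f"parsed({data['chart']})"}     # phrase-structure analysis
def t_rules_ana(data):  return {"interlingua": f"canonical({data['arcs']})"}
def t_rules_syn(data):  return {"tree": f"target_structure({data['interlingua']})"}
def l_rules(data):      return {"leaves": f"concord({data['tree']})"}
def m_rules(data):      return f"surface({data['leaves']})"             # morphological synthesis

ANALYSIS  = [a_rules, p_rules, t_rules_ana]
SYNTHESIS = [t_rules_syn, l_rules, m_rules]

def translate(tu):
    data = tu
    for stage in ANALYSIS + SYNTHESIS:   # each stage sees only its own input
        data = stage(data)
    return data

print(translate("Das Haus ist gross ."))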
The TU is first subjected to a two-stage string-segmentation and 'lemmatisation' analysis. In the first stage it is compared word by word with a 'stop-list' of frequently occurring words (mostly function words); words not found in the stop-list undergo string-segmentation analysis, again on a word-by-word basis. String-segmentation rules form a finite-state grammar of affix-stripping rules ('A-rules') which handle mostly inflectional morphology. The output is a Chart with labelled arcs indicating the lexical unit (LU) and a possible interpretation of the stripped affixes, this 'hypothesis' to be confirmed by dictionary look-up. By way of example, consider (1), a possible French rule, which takes any word ending in -issons (e.g. finissons or hérissons) and constructs an arc on the Chart recording the hypothesis that the word is an inflected form of an '-ir' verb (i.e. finir or *hérir).

(1) V + "-ISSONS" -> V + "-IR" [PERS=1 & NUM=PLUR & TENSE=PRES & MOOD=INDIC]

At the end of dictionary look-up, a temporary 'sentence dictionary' is created, consisting of copies of the dictionary entries for (only) those LUs found in the current TU. This is purely an efficiency measure. The sentence dictionary may of course include entries for homographs which will later be rejected.
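As a rough illustration (not Bede's formalism or code), an A-rule such as (1) might be interpreted along the following lines, with the hypothesis confirmed or rejected by dictionary look-up; the dictionary contents below are invented.

# Illustrative sketch of interpreting the affix-stripping rule (1) above.
DICTIONARY = {"finir": {"CAT": "V", "CONJ": "-ir"}}     # *herir is deliberately absent

def a_rule_issons(word):
    if word.endswith("issons"):
        lu = word[:-len("issons")] + "ir"               # hypothesized lexical unit
        return {"LU": lu, "PERS": 1, "NUM": "PLUR",
                "TENSE": "PRES", "MOOD": "INDIC"}
    return None

def lemmatise(word):
    hypothesis = a_rule_issons(word)
    if hypothesis and hypothesis["LU"] in DICTIONARY:    # dictionary confirms the arc
        return hypothesis
    return None

print(lemmatise("finissons"))    # confirmed hypothesis for LU "finir"
print(lemmatise("herissons"))    # None: *herir is not in the dictionary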
The Chart then undergoes a two-stage structural analysis. In the first stage, context-sensitive augmented phrase-structure rules ('P-rules') work towards creating a single arc spanning the entire TU. Arcs are labelled with appropriate syntactic class and syntactico-semantic feature information, and with a trace of the lower arcs which have been subsumed, from which the parse tree can be simply extracted. The trivial P-rule (2) is provided as an example. We are in fact still experimenting in this area. For a similar investigation, though on a machine with significantly different time and space constraints, see Slocum (1981).

T-rules

In the second stage of structural analysis, the tree structure implied by the labels and traces on these arcs is disjoined from the Chart and undergoes general tree-to-tree transductions as described by 'T-rules', resulting in a single tree structure representing the canonical form of the TU. The formalism for the T-rules is similar to that for the P-rules, except in the geometry part, where tree structures rather than arc sequences are defined. Consider the necessarily more complex (though still simplified) example (3), which regularises a simple English passive. The synthesis T-rules for a given language can be viewed as analogues of the T-rules that are used for analysis of that language, though it is unlikely that for synthesis the analysis rules could simply be reversed.

Once the desired structure has been arrived at, the trees undergo a series of context-sensitive rules used to assign mainly syntactic features to the leaves ('L-rules'), for example for the purpose of assigning number and gender concord. The formalism for the L-rules is again similar to that for the P-rules and T-rules, the geometry part this time defining a single tree structure with no structural modification implied. A simple example for German is provided in (4). The list of labelled leaves resulting from the application of L-rules is passed to morphological synthesis (the superior branches are no longer needed), where a finite-state grammar of morphographemic and affixation rules ('M-rules') is applied to produce the target string.

The formalism for M-rules is much less complex than the A-rule formalism, the grammar being again straightforwardly deterministic. The only taxing requirement of the M-rule formalism (which, at the time of writing, has not been finalised) is that it must permit a wide variety of string manipulations to be described, and that it must define a transparent interface with the dictionary. A typical rule for French, for example, might consist of stipulations concerning information found both on the leaf in question and in the dictionary, as in (5).

(5) leaf info.: PERS=3; MOOD=INDIC
    dict. info.: CONJ(V)=IRREG
    assign: Affix "-T" to STEM1(V)

The general modularity of the system will have been quite evident. A key factor, as mentioned above, is that each of these grammars is just powerful enough for the task required of it: thus no computing power is 'wasted' at any of the intermediate stages. At each interface between grammars only a small part of the data structures used by the donating module is required by the receiving module. The 'unwanted' data structures are written to peripheral store to enable recovery of partial structures in the case of failure or mistranslation, though automatic backtracking to previous modules by the system as such is not envisaged as a major component.

The data used by the system consist of the different sets of linguistic rule packages, plus the dictionary. The system essentially has one large multilingual dictionary from which numerous software packages generate various subdictionaries as required, either in the translation process itself or for lexicographers working on the system. Alphabetical or other structured language-specific listings can be produced, while of course dictionary updating and editing packages are also provided. The system as a whole can be viewed as a collection of Production Systems (PSs) (Newell, 1973; Davis & King, 1977; see also Ashman (1982) on the use of PSs in MT) in the way that the rule packages (which, incidentally, as an efficiency measure, undergo separate syntax verification and 'compilation' into interpretable 'code') operate on the data structure. The system differs from the classical PS setup in distributing its static data over two databases: the rule packages and the dictionary. The combination of the rule packages and the dictionary, the software interfacing these, and the rule interpreter can however be considered analogous to the rule interpreter of a classical PS.

As an experimental research project, Bede provides us with an extremely varied range of computational linguistics problems, ranging from the principally linguistic task of rule-writing, to the essentially computational work of software implementation, with lexicography and terminology playing their part along the way.
null
null
null
null
Main paper: : , it has of course for a long time been standard practice in other areas of knowledge-based programming (Newell, 1973; Davis & King, 1977) .The third principle now current in MT and to be incorporated in Bede is that the translation process should be modular. This approach was a feature of the earliest 'second generation' systems (of. Vauquois, 1975:33) , and is characterised by the general notion that any complicated computational task is best tackled by dividing it up into smaller more or less independent sub-casks which communicate only by means of a strictly defined interface protocol (Aho et al, 1974) . This is typically achieved in the bit environment by a gross division of the translation process into analysis of source language and synthesis of target language, possibly with an intermediate transfer sca~e (see !.D below), with these phases in turn sub-divided, for example into morphological, lexical and syntactico-semantlc modules.This modularity may be reflected both in the linguistic organisation of the translation process and in the provision of software devices specifically tailored to the relevant sub-task (Vauquois, 1975:33) . This is the case in Bede, where for each sub-task a grammar interpreter is provided which has the property of being no more powerful than necessary for the task in question.This contrasts with the approach taken in TAt~-H~c~o (TAUM, Ig73), where a single general-purpose device (Colmerauer's (1970 individual formalisms and processors: these are described in detail in the second half of this paper. B. The microproce,ssor environment !t is in the microprocessor basis that the principle interest in this system lies, and, as mentioned above, the main concern is the effects of the restrictions that the environment imposes. Development of the Bede prototype is presently caking place on ZRO-based machines which provide 6Ak bytes of in-core memory and 72Ok bytes of peripheral store on two 5-I/~" double-sided double-density floppy disks. The intention is that any commercial version of Bede would run on more powerful processors with larger address space, since we feel chat such machines will soon rival the nopularity of the less powerful ZRO's as the standard desk-cop hardware.Pro~rarzninR so far has been in Pascal-" (Sorcim, 197q) , a Pascal dialect closely resembling UCSD Pascal, but we are conscious of the fact that both C (Kernighan & Ritchie, 1978) and BCPL (Richards & Whitby-Strevens, Ig7g) may be more suitable for some of the software elements, and do not rule out completing the prototype in a number of languages. This adds the burden of designing compatible datastructures and interfaces, and we are currently investigating the relative merits of these languages.Portability and efficiency seem to be in conflict here.Microprocessor-based MT contrasts sharply with the mainframe-based activity, where the significance of problems of economy of storage and efficiency of programs has decreased in recent years.The possibility of introducing an element of human interaction with the system (of. 
Kay, Ig80; Melby, 1981) is also highlighted in this environment.Contrast systems like SYSTRAN (Toma, 1977) and GETA (Vauquois, 1975, lq7g; Boiler & Nedobejkine,IggO) which work on the principle of large-scale processing in batch mode.Our experience so far is chat the economy and efficiency in data-structure design and in the elaboration of interactions between programs and data and between different modules is of paramount importance.While it is relatively evident thac large-scale HT can be simulated in the microprocessor environment, the cost in real time is tremendous: entirely new design ~nd implementation strategies seem co be called for. The ancient skills of the programmer that have become eroded by the generosity afforded by modern mainframe configurations become highly valued in this microprocessor application.The state of the art of language processing is such chat the analysis of a significant range of syntactic patterns has been shown to be possible, and by means of a number of different approaches. Research in this area nowadays is concentrated on the treatment of more problematic constructions (e.g. Harcus, lqgO).This observation has led us tO believe that a degree of success in a small scale MT project can be achieved via the notion of restricting the complexity of acceptable input, so that only constructions that are sure tc ne Correctly analysed are permitted. is purely deterministic. While this approach would be quite unsuitable for a larRescale general purpose HT system, in the present context -where the problem can be minimised -~c seems Co be a reasonable approach.Our own model for the Bede tnCerlingua has noc yet been finalised.We believe this co be an area for research and experimentation once the system software has been more fully developed. ~ur current hypothesis is chat the InterlinRua will cake the form of a canonical representation of the text in which valency-houndness and (deep) ~e will play a significant role.Sentential features such as tense and aspect will be capcured by 'universal' system of values for the languages involved. We feel chat research in chLs area will, when the time comes, be a siEniflcanc and valuable by-product of the project as a whole.In this second half of the paper we present a description of the translation process in Bede, as it is currently envisaged. The process is divided broadly into two parts, analysis and synthesis, the interface between the two being provided by the Interlingua.The analysis module uses a Chart-like structure (cf. Kaplan, 1973) and a series of grammars to produce from the source text the Incerlingua tree structure which serves as input to synthesis,where it is rearranged into a valid surface structure for the target language. The 'translation unit' (TU) is taken co be the sentence, or equivalent (e.g. section heading, title, figure caption). Full details of the rule formalisms are given in Somers (Ig81).The TU is first subjected to a two-stage string-segmentation and 'lemmatlsation' analysis. 
In the first stage it is compared word by word with a 'stop-list' of frequently occurring words (mostly function words); words not found in the stop-list undergo string-segmentatlon analysis, again on a word by word basis.Stringsegmentation rules form a finite-state grammar of affix-stripping rules ('A-rules') which handle mostly inflectional morphology.The output is a Chart with labelled arcs indicating lexical unit (LU) and possible interpretatio n o£ the stripped affixes, this 'hypothesis' to be confirmed by dictionary look-up.By way of example, consider (I~, a possible French rule, which takes any word ending in -issons (e.g. finissons or h4rissons) and constructs an arc on the Chart recording the hypothesis that the word is an inflected form of an '-it' verb (i.e. finir or *h4rir).(I) V + "-ISSONS" ~ V ~ "-IR" [PERS=I & NUM=PLUR & TENSE=PRES & HOOD=INDIC]At the end of dictionary look-up, a temporary 'sentence dictionary' is created, consisting of copies of the dictionary entries for (only) those LUs found in the current TU. This is purely an efficiency measure.The sentence dictionary may of course include entries for homographs which will later be rejected.The chart then undergoes a two-stage structural analysts.In the first stage, context-sensitive augmented phrase-structure rules ('P-rules') work towards creating a single arc spanning the entire TU.Arcs are labelled with appropriate syntactic class and syncactico-semantic feature information and a trace of the lower arcs which have been subsumed from which the parse tree can be simply extracted.The trivial P-rule (2) iS provided as an examnle. We are in fact still experimenting in this area.For a similar investigation, though on a machine with significantly different time and space constraints, see Slocum (1981).'T-rules'In the second stage of structural analysis, the tree structure implied by the labels and traces on these arcs is disjoined from the Char~ and undergoes general tree-Co-cree-transductions as described by 'T-rules', resulting in a single tree structure representing the canonical form of the TU.• The formalism for the T-rules is similar co that for the P-rules, except in the geometry part, where tree structures rather than arc sequences are defined. Consider the necessarily more complex (though still simplified) example (3~. which regularises a simple English passive. The synthesis T-rules for a given language can be viewed as analogues ~f the T-rules that are used for analysis of that language, though it is unlikely that for syntbes~s the analysis rules could be simpLy reversed, Once the desired structure has been arrived at, the trees undergo a series of context-sensitive rules used to assign mainly syntactic features co the leaves ('L-rules'), for example for the purpose of assigning number and gender concord (etc.).The formalism for the L-rules is aglin similar to that for the p-rules and T-rules, the geOmett'y pert this time definYng a single tree structure with no structural modification implied.A simple example for German is provided here (4). The llst of labelled leaves resulting from the application of L-rules is passed to morphological synthesis (the superior branches are no longer needed), where a finite-state grammar of morpbographemic and afftxation rules ('H-rules') is applied to produce the target string. 
The formalism for M-rules is much less complex than the A-rule formalism, the grammar being again straightforwardly deterministic. The only taxing requirement of the M-rule formalism (which, at the time of writing, has not been finalised) is that it must permit a wide variety of string manipulations to be described, and that it must define a transparent interface with the dictionary. A typical rule for French for example might consist of stipulations concerning information found both on the leaf in question and in the dictionary, as in (5).

(5) leaf info.: PERS=3; MOOD=INDIC
    dict. info.: CONJ(V)=IRREG
    assign: Affix "-T" to STEM1(V)

The general modularity of the system will have been quite evident. A key factor, as mentioned above, is that each of these grammars is just powerful enough for the task required of it: thus no computing power is 'wasted' at any of the intermediate stages. At each interface between grammars only a small part of the data structures used by the donating module is required by the receiving module. The 'unwanted' data structures are written to peripheral store to enable recovery of partial structures in the case of failure or mistranslation, though automatic backtracking to previous modules by the system as such is not envisaged as a major component.

The data used by the system consist of the different sets of linguistic rule packages, plus the dictionary. The system essentially has one large multilingual dictionary from which numerous software packages generate various subdictionaries as required either in the translation process itself, or for lexicographers working on the system. Alphabetical or other structured language-specific listings can be produced, while of course dictionary updating and editing packages are also provided.

The system as a whole can be viewed as a collection of Production Systems (PSs) (Newell, 1973; Davis & King, 1977; see also Ashman (1982) on the use of PSs in MT) in the way that the rule packages (which, incidentally, as an efficiency measure, undergo separate syntax verification and 'compilation' into interpretable 'code') operate on the data structure. The system differs from the classical PS setup in distributing its static data over two databases: the rule packages and the dictionary. The combination of the rule packages and the dictionary, the software interfacing these, and the rule interpreter can however be considered as analogous to the rule interpreter of a classical PS.

As an experimental research project, Bede provides us with an extremely varied range of computational linguistics problems, ranging from the principally linguistic task of rule-writing, to the essentially computational work of software implementation, with lexicography and terminology playing their part along the way. Appendix:
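As a rough illustration of the kind of stipulation expressed by M-rule (5) above, the sketch below affixes "-t" to a verb stem when the leaf carries PERS=3 and MOOD=INDIC and the dictionary marks the conjugation as irregular. The dictionary entry, the stem value and the function name are all invented; the real M-rule formalism had not been finalised at the time of the paper.

# Toy rendering of the stipulations in rule (5); names and entries are invented.
DICTIONARY = {
    "pouvoir": {"CONJ": "IRREG", "STEM1": "peu"},   # invented lexical entry
}

def m_rule_5(leaf):
    entry = DICTIONARY[leaf["lu"]]
    if leaf.get("PERS") == 3 and leaf.get("MOOD") == "INDIC" and entry["CONJ"] == "IRREG":
        return entry["STEM1"] + "t"     # affix "-t" to STEM1 of the verb
    return None                         # rule does not apply; try other M-rules

print(m_rule_5({"lu": "pouvoir", "PERS": 3, "MOOD": "INDIC"}))   # -> "peut"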
null
null
null
null
{ "paperhash": [ "king|eurotra_and_its_objectives", "slocum|a_practical_comparison_of_parsing_strategies", "marcus|a_theory_of_syntactic_recognition_for_natural_language", "hutchins|machine_translation_and_machine‐aided_translation", "davis|an_overview_of_production_systems", "vauquois|l’informatique_au_service_de_la_traduction", "bostad|quality_control_procedures_in_modification_of_the_air_force_russian-english_mt_system", "king|eurotra_–_a_european_system_for_machine_translation", "mcnaught|the_translator_as_a_computer_user", "richards|bcpl,_the_language_and_its_compiler", "elliston|computer_aided_translation_-_a_business_viewpoint", "aho|the_design_and_analysis_of_computer_algorithms" ], "title": [ "EUROTRA and its objectives", "A Practical Comparison of Parsing Strategies", "A theory of syntactic recognition for natural language", "Machine Translation and Machine‐Aided Translation", "An overview of production systems", "L’informatique au service de la traduction", "Quality control procedures in modification of the Air Force Russian-English MT system", "EUROTRA – A European System for Machine Translation", "The translator as a computer user", "BCPL, the language and its compiler", "Computer aided translation - a business viewpoint", "The Design and Analysis of Computer Algorithms" ], "abstract": [ "EUROTRA is a machine translation system currently being planned under the auspices of the Commission of the European Communities. From its conception, certain objectives were set up which any European system must meet. These objectives constitute a set of criteria constantly used in making design decisions. This paper takes each of these objectives in turn and attempts to describe its consequences on the overall design of the system. Since EUROTRA is intended to profit as much as possible from the current state of the art in machine translation, an attempt is also made to put it into context by referring to the characteristics of other systems with respect to the same criteria.", "INTRODUCTION Although the l i terature dealing with formal and natural languages abounds with theoretical arguments of worstcase performance by various parsing strategies [e.g. , Grif f i ths & Petrick, 1965; Aho & Ullman, 1972; Graham, Harrison & Ruzzo, Ig80], there is l i t t l e discussion of comparative performance based on actual practice in understanding natural language. Yet important practical considerations do arise when writ ing programs to understand one aspect or another of natural language utterances. Where, for example, a theorist wi l l characterize a parsing strategy according to i ts space and/or time requirements in attempting to analyze the worst possible input acc3rding to ~n arbi t rary grammar s t r i c t l y l imited in expressive power, the researcher studying Natural Language Processing can be jus t i f ied in concerning himself more with issues of practical performance in parsing sentences encountered in language as humans Actually use i t using a grammar expressed in a form corve~ie: to the human l inguist who is writ ing i t . Moreover, ~ r y occasional poor performance may be quite acceptabl:, part icular ly i f real-time considerations are not invo~ed, e.g., i f a human querant is not waiting for the answer to his question), provided the overall average performance is superior. 
One example of such a situation is o f f l ine Machine Translation.", "Abstract : Assume that the syntax of natural language can be parsed by a left-to-right deterministic mechanism without facilities for parallelism or backup. It will be shown that this 'determinism' hypothesis, explored within the context of the grammar of English, leads to a simple mechanism, a grammar interpreter. (Author)", "The recent report for the Commission of the European Communities on current multilingual activities in the field of scientific and technical information and the 1977 conference on the same theme both included substantial sections on operational and experimental machine translation systems, and in its Plan of action the Commission announced its intention to introduce an operational machine translation system into its departments and to support research projects on machine translation. This revival of interest in machine translation may well have surprised many who have tended in recent years to dismiss it as one of the ‘great failures’ of scientific research. What has changed? What grounds are there now for optimism about machine translation? Or is it still a ‘utopian dream’ ? The aim of this review is to give a general picture of present activities which may help readers to reach their own conclusions. After a sketch of the historical background and general aims (section I), it describes operational and experimental machine translation systems of recent years (section II), it continues with descriptions of interactive (man‐machine) systems and machine‐assisted translation (section III), (and it concludes with a general survey of present problems and future possibilities section IV).", "Abstract : Since production systems were first proposed in 1943 as a general computational mechanism, the methodology has seen a great deal of development and has been applied to a diverse collection of problems. Despite the wide scope of goals and perspectives demonstrated by the various systems, there appear to be many recurrent themes. This paper is an attempt to provide an analysis and overview of those themes, as well as a conceptual framework by which many of the seemingly disparate efforts can be viewed, both in relation to each other, and to other methodologies. Accordingly, the authors use the term 'production system' in a broad sense, and attempt to show how most systems which have used the term can be fit into the framework. The comparison to other methodologies is intended to provide a view of PS characteristics in a broader context, with primary reference to procedurally-based techniques, but with reference also to some of the current developments in programming and the organization of data and knowledge bases.", "Citer cet article Vauquois, B. (1981). L’informatique au service de la traduction. Meta, 26(1), 8–17. https://doi.org/10.7202/004556ar Ce document est protégé par la loi sur le droit d'auteur. L'utilisation des services d'Érudit (y compris la reproduction) est assujettie à sa politique d'utilisation que vous pouvez consulter en ligne. [https://apropos.erudit.org/fr/usagers/politique-dutilisation/]", "The paper gives the background leading to the development of current quality control procedures used in modification of the Russian-English system. A special program showing target language translation differences has become the central control mechanism. Procedures for modification of dictionaries, homographs and lexicals, and generalized linguistic modules are discussed in detail. 
A final assessment is made of the procedures and the quantitative results that can be obtained when they are used.", "1. Lessons from the past Previous articles in this Journal will have given the reader an idea of the state of the art in currently operational machine translation Systems. This article describes a system whieh is planned, and which it is hoped will be developed by all the Member States of the European Community acting together, within the framework of a single collaborative project. The motivation for such a project is manifold. First, we have learnt a great deal from the Systems which already exist, both in terms of what to do and in terms of what not to do. To take the positive lessons first: the most important, of course, is that machine aided translation is feasible. This lesson is extremely important. After the disappointments of the 60's, it took a great deal of courage to persist in the belief that it was worthwhiie working on machine translation. A great debt is owed to those who did persist, whether they continued to develop commercial Systems with the tools then available or whether they carried on with the research needed to provide a sound basis for more advanced Systems. Had it not been for their stubbornness, machine translation would now be one of those good ideas which somebody once had, but which proved in the end impractical like a perpetual motion machine, for example instead of being a discipline undergoing a period of renaissance and new growth. Secondly, we have learnt that problems which once seemed intractable are not really so. Looking at a book on machine translation written in the early 60's the other day, I was surprised to find the treatment of idioms and of semi-fixed phrases being discussed äs a difficult theoretical problem. Of course, idioms still must be treated, and must be treated with care, but operational Systems have shown us that they can be succesfully translated. This does not mean that no system will ever again translate \"out of sight, out of mind\" äs \"invisible idiot\", but if it does so, it will be for lack of relevant data, not because mechanisms to deal with such phrases are not adequate. It would be possible to make a fairly extensive list of similar problems, which once gave machine translators nightmares but now ortly cause mild insomnia. Suffice it to say that experience with existing Systems has given us the knowledge that such problems can be solved, and the courage to find ever better ways of solving them. At a technical level, too, we have learnt a lot from existing systems. Early, not very succesful, machine translation Systems were dictionary based, essentially taking one word at a time and trying to find its equivalent in the target language. As a fairly natural reaction to the disappointing results obtained by such a method, there was something of a swing later to concentrating on the linguistic änalysis parts of the system, those parts which tried to determine the underlying structure of the input text in order to translate at a \"deeper\" level. Practical experience has taught us that even though änalysis is cruciäl, dictionaries retain a great importance, in that any working system will rely heavily on large dictionaries, sometimes containing whole expressions äs single entries, rieh in static linguistic Information on each entry and serving äs essential data for the translation process. 
So we have learnt to pay attention both to the initial design and coding of dictionaries, and to their manipulation in terms of large data bases which must be constantly updated and maintained. Based on rather more negative experience, we have learnt that system design is all important in a machine translation system. This can be said rather differently, by saying that we have discovered that a translation system is necessarily going to be big and that big Systems need special treatment. No one person, or even group of persons, can hope to keep a large Computer program under control if it is written äs an amorphous riiass. It will be impossible, when things", "In recent years, a large amount of work has been done in the field of computational linguistics which should be of immediate interest to translators, and perhaps also to interpreters. It is the purpose of this paper to bring to the attention of professional linguists the usefulness of, and benefit to be derived from, work currently in progress in this field, and to allay the wariness and scepticism that they may feel towards computers (cf. Arthern 1978, Lawson 1979). We hope to demonstrate that, firstly, given its nature and purpose, auto-matic translation does not work in competition with human translators, but rather alongside them; and secondly, that there are several other computer aids for translators as yet only available to large bodies employing translators, but which in the very near future should be available to the individual freelance translator working at home on the slimmest of budgets!", "Foreword 1. The BCPL philosophy 2. The main features of BCPL 3. Advanced facilities 4. The library, language extensions, and machine independence 5. Debugging and error handling 6. The BCPL lexical and syntax analyser 7. Computer portability 8. Language definition References Index.", "Before one starts to look for a particular solution, it is necessary to define the precise needs of the problem. Such is the case with our Company; the solution we are pursuing is tailored to the specific communication needs we have identified and it may well not be the most effective direction for another Company. In order to understand why we have chosen our particular path, it is helpful to explain briefly the Company environment.", "From the Publisher: \nWith this text, you gain an understanding of the fundamental concepts of algorithms, the very heart of computer science. It introduces the basic data structures and programming techniques often used in efficient algorithms. Covers use of lists, push-down stacks, queues, trees, and graphs. Later chapters go into sorting, searching and graphing algorithms, the string-matching algorithms, and the Schonhage-Strassen integer-multiplication algorithm. Provides numerous graded exercises at the end of each chapter. \n \n \n0201000296B04062001" ], "authors": [ { "name": [ "M. King", "S. Perschke" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Jonathan Slocum" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Mitchell P. Marcus" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "W. J. Hutchins" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Randall Davis", "Jonathan J. 
King" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "B. Vauquois" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "D. Bostad" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "M. King" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "J. McNaught", "H. Somers" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "M. Richards" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "J. Elliston" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "A. Aho", "J. Hopcroft", "J. Ullman" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null, null, null, null, null, null, null, null, null, null ], "s2_corpus_id": [ "144735503", "12179283", "6616065", "17995807", "53531425", "119778491", "48524235", "33545038", "22525403", "60574162", "26855299", "29599075" ], "intents": [ [], [ "background" ], [], [], [], [], [], [], [], [], [], [] ], "isInfluential": [ false, false, false, false, false, false, false, false, false, false, false, false ] }
- Problem: The paper aims to describe an ongoing research project at the Centre for Computational Linguistics, focusing on the translation process in Bede and the interpretation of stripped affixes.
- Solution: The hypothesis being investigated is the construction of rules, such as the example rule for French verbs ending in "-issons," to determine the inflected form of the verb based on specific criteria, which will be confirmed through dictionary look-up and structural analysis.
504
0.001984
null
null
null
null
null
null
null
null
f436e0ebe01dd541091841a905295c41824cbf90
12556703
null
Distinguishing Fact From Opinion and Events From Meta-Events
A major problem in automatically analyzing the text of military messages in order to synthesize data base elements is separating fact from opinion, i.e., identifying ...
{ "name": [ "Montgomery, Christine A." ], "affiliation": [ null ] }
null
null
First Conference on Applied Natural Language Processing
1983-02-01
14
7
null
[Dwiggins and Silva 1981.] The objective of this research has been to provide an automated capability to supplement the presently largely manual, labor-intensive task of maintaining the currency of data bases which derive their information elements from the text of military messages. Although some effort has been devoted to primarily interactive approaches to the problem, and to messages which have highly predictable columnar summary formats, the majority of the research and development work has concentrated on the more difficult task of analyzing unformatted narrative text with user interaction limited to occasional assistance to the automated system.

A testbed system called MATRES has been constructed in Prolog to run under the UNIX operating system on the PDP 11/70. MATRES is a knowledge based system for understanding the natural language text of event-oriented messages in the domains of air activities and space/missile (S&M) activities. The knowledge structures in MATRES, called "templates", are essentially frames or scripts describing entities and events, which answer the military user's basic questions about these phenomena, as illustrated in the simplified view of an event template presented in Figure 1. [Footnote 1: This work has been carried out under the sponsorship of the Rome Air Development Center (RADC), U.S. Air Force Systems Command, Griffiss Air Force Base, New York.] The templates are hierarchically organized; lower level templates deal with objects or times, mid level with events containing objects and times, higher level with activities composed of events. The slots in the templates contain procedures which operate upon the output of the Definite Clause Grammar (DCG) to instantiate the templates.

We are currently using a corpus of approximately 125 messages in the S&M domain as a basis for developing a scenario for evaluation of the extended MATRES testbed, as well as a testbed for a related knowledge based system, the Active/Introspective Information System [described in Ruspini 1981 and Ruspini 1982] for which MATRES serves as a front end. The scenario involves two simulated nations, the Delta Confederation of the Atlantic States and the Epsilon Republic. Both nations have space programs, and each is interested in monitoring the technological progress of the other, using their own satellite and sensor resources and those of other friendly nations. The set of messages to be analyzed by MATRES are mainly reports of space and satellite launches and orbital activities of the Delta Confederation, which are being monitored and evaluated by the Epsilon Republic. The text of messages used in the scenario has the structure and format of actual messages reporting on S&M activities, although the lexicon is substantially different.

As discussed in several previous technical reports prepared under earlier contracts with RADC ([Kuhns and Montgomery 1973], [Silva et al 1979a], [Silva et al 1979b]), the subset of the English language on which the text of intelligence messages is based is essentially a specialized language for reporting events. Intermixed with factual statements reporting on entities and events, however, is much evaluative commentary. Moreover, press announcements of the Delta Confederation are included in the reports, and evaluative comments are made both about the events reported in the press announcements and the announcements themselves.
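A possible rendering of the "template" knowledge structures described above, with every name and the parse stand-in invented for illustration (this is not MATRES code): a frame whose slots are filled by small procedures that inspect the parser (DCG) output for a message, answering the analyst's basic questions about an event.

# Invented sketch of a launch-event template whose slots are filling procedures
# over a (stand-in) parse produced by the DCG.

def find_role(parse, role):
    # Stand-in for a slot-filling procedure that inspects the parser output.
    return parse.get(role)

LAUNCH_TEMPLATE = {
    "event_type": "launch",
    "slots": {
        "object":      lambda parse: find_role(parse, "object"),
        "launch_site": lambda parse: find_role(parse, "site"),
        "time":        lambda parse: find_role(parse, "time"),
    },
}

def instantiate(template, parse):
    return {name: fill(parse) for name, fill in template["slots"].items()}

parse = {"object": "Terrex 584", "site": "launch site A", "time": "1130Z"}
print(instantiate(LAUNCH_TEMPLATE, parse))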
In synthesizing data base elements from these messages, it is crucial to sort out these different levels of information. This paper defines an approach to identifying and labeling these levels, based on the event classification presented in Figure 2 and summarized below. Before describing the event classification, however, it is enlightening to review briefly some example messages in order to understand the motivation for this rather complex model. Some messages -- for example, those encountered in our previous research on the air activities domain -- may report only primitive events. However, as noted above and illustrated in Figure 3, a message may in fact be a report of a report -- that is, it may include a report of an event by some other source than the originator of the message. The "announcement" is thus a report of a "launch" event, which is the basic or primitive event being reported. The "announcement" is an event, but it is clearly not on the same level as the primitive event. Rather, it is a report about the launch, a meta-event that incidentally introduces a new information source of different credibility than the originator of the message.

However, this distinction alone is not sufficient to account for the difference between the initial two sentences of the example message and the third sentence, which contains an evaluation of the announcement, stating that it was characterized by "routine" wording. It is thus an evaluative commentary on the press announcement of the launch event. Since the announcement has been defined as a meta-event, the comment represents another meta-level. In fact, in reviewing additional examples of the message traffic in this scenario, it is clear that, in order to accurately distill and represent information contained in the text of these messages, the analytical methodology must identify and uniquely label the types of information enumerated in Figure 2, which was developed to account for the levels of content occurring in the event-oriented message discourse.

In this classification, there are two major types of events, meta events and non-meta events. Of the latter, events may be observational or primitive. An observational event is a direct perception of an event, which may be a visual perception (e.g., "observe", "sight"), or in the case of a sensor, an electronic measurement of the emitted energy characterizing the event. A primitive event is thus a physical event of some kind which does not involve an observation or perception. Primitive events may be attributive or relational. An attributive event describes a situation in which a particular entity has a particular attribute at a certain time or during a particular time interval (other than the attribute location, which is covered under relational events), for example: "Terrex 534 operates in the high density mode". A relational event involves entities which stand in an n-ary relation with each other at a certain time or during a fixed time period. The importance of the subclasses of world point and world point qualification events is in defining the world line of an entity, say the track of a ship or submarine. Of these distinctions, the most relevant for this discussion are those involving meta-events and non-meta events, and of the latter, primitive versus observational events.
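One way to picture the event classification just described is as a small type hierarchy. The taxonomy is the paper's; this particular class encoding is only an illustrative sketch.

# The taxonomy below follows the text; the class encoding itself is invented.

class Event: ...

class MetaEvent(Event):
    # A report about another event (or about another report); 'order' counts the
    # levels: 0 = a report of an observed primitive event, 1 = a report of such
    # a report (e.g. a press announcement), 2 = a report of a first-order report.
    def __init__(self, about, order):
        self.about, self.order = about, order

class NonMetaEvent(Event): ...

class ObservationalEvent(NonMetaEvent):
    """Direct perception of an event: a visual sighting or a sensor measurement."""

class PrimitiveEvent(NonMetaEvent):
    """A physical event that involves no observation or perception."""

class AttributiveEvent(PrimitiveEvent):
    """An entity has a particular attribute at a time or over an interval."""

class RelationalEvent(PrimitiveEvent):
    """Entities standing in an n-ary relation at a time or over a time period."""

launch = PrimitiveEvent()
analyst_report = MetaEvent(about=launch, order=0)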
In terms of the scenario described above, a primitive event may occur, say, a satellite launch by the Delta Confederation, as illustrated in Figure 4. This event, like any other event, involves the emission of energy. Such an emission is perceived by a sensing device of the Epsilon Republic. The device generates (down arrow) a report of the given event, in terms of the particular attributes of the event it is designed to measure. This sensor report is an observational event, entailing an observation of a primitive event. An S&M analyst for the Republic accesses (up arrow) this report, which contains digitized information generated by the sensor, interprets this information as a launch event, and issues his own report about that event. His report, which is an interpretation of the primitive event based on the observational event, is a zeroth order meta-event: the common denominator of the message traffic. At the same time, the Deltas may release an internal report about the launch, which would also constitute a zeroth order meta-event. Based on that report, the Delta press agency, NYT, may issue an announcement of the primitive event; the announcement thus constitutes a first-order meta-event. An Epsilon Republic reporter may then make an interpretation of that announcement, in the form of a report, which -- being a report of a first-order meta-event -- is therefore a second order meta-event. Corrections or other changes made by Epsilon reporters to these messages constitute a third meta level of reporting event, since they may reference reports of reports of events.

The model thus far accounts for the event reporting structure which underlies the Delta/Epsilon scenario, but we must also account for the reporter's comments about the event -- i.e., his interpretation or evaluation of the event -- which can occur at any of these levels. The reporter's goal is to identify and describe all the relevant parameters of an event (exemplified by the slots in the template for a launch event, shown in the center of Figure 5) based on the observational report produced by the sensor and any other information he may have (e.g., knowledge that a replacement of a nonfunctioning communications satellite is likely within a given time frame). However, if the reporter's information is incomplete or imprecise, he cannot exactly describe the parameters of an event, but will give his best interpretation of the event based on what he knows. Thus he may report a launch of "an unidentified satellite", "a probable television support satellite", "a possible CE satellite". In some cases, he may have enough information to make a comparative evaluation with launch events which have occurred in the past: "a new ESV", "the second CE satellite to be successfully orbited by the Deltas this year". Still another type of meta information ... If a reporter's information is good, i.e., complete and precise, a launch report of the type illustrated by Msg. 94-009 is produced. However, when his information is imprecise and his knowledge can add little to it, he must resort to the qualified or meta-commented types of messages described above.

In order to accommodate such qualified and meta-commentary types of information, each event template may have associated with it one or more meta templates containing interpretive or evaluative information. Thus, as represented in Figure 5, an instantiated launch template produced from an observational event and a primitive event (a zeroth order reporting event, as illustrated in Figure 4) may have several additional qualifications (exemplified by, but not limited to, the meta templates illustrated in the figure). So, for example, a meta evaluative template associated with a launch template expresses the Epsilon reporter's degree of belief or confidence in the launch parameters he reports: the object in the event template is believed by the Epsilon reporter to be a CE (or Crop Enhancement) satellite from the information presented in the observational report by the sensor, and from his own knowledge of past occurrences of CE satellite launches, as well as expectations of possible replacement launches, etc., during particular time intervals. [Footnote 2: The template and meta template structures shown in this figure are intended to be illustrative only: for example, the object, date/time group, and deorbit information constitute embedded templates linked to the main "launch" event template by pointers. In addition, there are several alternatives for more economical internal representation of meta template information, which are currently under review for the actual design and implementation of this information within MATRES and within the Active/Introspective Information System (a knowledge-based intelligent assistant, as mentioned above), which MATRES feeds.] All or none of the listed parameters for a launch event may be qualified in this way. Thus, in Figure 5, the Epsilon reporter believes that, to the best of his knowledge, the space object involved in the major launch event is a "probable" CE or crop enhancement satellite, and that the time of launch is "approximately" 1130Z.

Each meta template has fields which identify the source, as well as the time and date of the interpretive information. As opposed to the "infosource" parameter of these meta templates -- which shows the ultimate source of the information contained in the instantiated template -- the source is the actual originator of the message. Thus, in the case of the "designate" meta template, the "infosource" of the designation information (i.e., that the particular satellite launched from that site at that date and time has been designated a space object called "Terrex 584") is NYT, the news agency of the Delta Confederation, indicating that this information came from an NYT press announcement quoted (and interpreted) by an Epsilon reporter. This distinguishes such information from that represented by the "assign" meta template, where the Epsilon Republic reporting staff assign an identification number of their own to the satellite payload for future reference.

Another significant analytical function of the Epsilon reporter in this scenario is the comparative evaluation, illustrated by the "compare" meta template. These comparisons involve events which have taken place before, in this case, launch events, and/or objects involved in such launches. As in the example shown in Figure 5, the comparison may specify an event involving the continuation of a satellite in an active status, where other such satellites are now inactive (implied comparison): e.g., "Terrex 584 is the only first generation crop enhancement satellite which is currently active." An important function of meta templates is to represent predictive information, i.e., descriptions of events expected in the future, based on other events which have occurred in the past, or are currently in process.
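The following sketch shows how such interpretive commentary might be attached to an instantiated launch template as a list of meta templates, each carrying its own source information, in the spirit of the Figure 5 discussion. The data layout, field names and the helper function are invented for illustration, not the MATRES representation.

# Invented data layout: an instantiated launch template with attached meta
# templates; facts() separates the factual slots from the commentary.

launch = {
    "event": "launch",
    "object": "CE satellite",
    "time": "1130Z",
    "meta": [
        {"type": "evaluate",  "slot": "object", "qualifier": "probable",
         "source": "Epsilon reporter"},
        {"type": "evaluate",  "slot": "time", "qualifier": "approximately",
         "source": "Epsilon reporter"},
        {"type": "designate", "name": "Terrex 584", "infosource": "NYT"},
        {"type": "assign",    "name": "inventory id", "source": "Epsilon staff"},
        {"type": "expect",    "event": "deorbit", "time": "October 2"},
    ],
}

def facts(template):
    return {key: value for key, value in template.items() if key != "meta"}

print(facts(launch))
print([m["type"] for m in launch["meta"]])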
The "expect e template in Figure 5 expresses the presumable or expected pMsJlrleters of mission duration, and conaoqueRtly, the deorbit event which 18 mltlclpated for (~tober 2~To summarize, the function of the mete templates Is to Identify and delimit evaluative commentary, which isolates the factual InformaUon presented In most zeroth order mote event reports, and Identifies Information pertaining to credibility of the event occurrence, compere-bllll~/ with otilcr aim[Jar entitles and events, prediction8 of fUtlKe related ovento, etc.On the other hand, In addiUon to distinguishing the vsrl-o4J8 Iovehl of event occuffenGe, observation, and report-Ing, the fonotJon of the mete event structure Illustrated in Figures 2 and 4 Is to clearly demarcate the "Oetta versus Epallon" (in terms of the scenario described above) aspects of the messages. The reporters of the Epailoll Republic "assign" "Specold" and "WSJ" IdeITtlflcation numbers for space object Inventory purposes; the De[iN "denlgnate" their own apace objects with par*,Jc-tdaJ, clanaea of object names, e.G., "Tarrex 5~59". They • launch", "put Into orbit", "deorbit", "recover", etc., while the Epailon reporters "assess", determine "active" vs. "Inactive" cactus, attribute satellite =programs = and "medntenance" of such progreums, at(:., to the Delten.
null
null
null
null
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
504
0.013889
null
null
null
null
null
null
null
null
f8651bcdc9118ff692d8a560a39d5cac5d57dc90
8800652
null
Scruffy Text Understanding: Design and Implementation of the {NOMAD} System
The task of understanding unedited naval ship-to-shore messages is implemented in the presence of a large database of domain specific knowledge. The program uses internal syntactic and semantic expectations to analyze the texts and to correct errors that arise during understanding, such as syntactic errors, missing punctuation, and errors of spelling and usage. The output of the system is a well-formed English translation of the message. This paper describes some of the knowledge mechanisms that have been implemented in the NOMAD system.
{ "name": [ "Granger, Richard H. and", "Staros, Chris J. and", "Taylor, Gregory S. and", "Yoshii, Rika" ], "affiliation": [ null, null, null, null ] }
null
null
First Conference on Applied Natural Language Processing
1983-02-01
5
7
null
Consider the following message, LOCKED ON OPEN FIRED DIW. This is an actual naval message containing sentence boundary problems, missing subjects and objects, an incorrect verb conjugation, and an abbreviation for "dead in water." The NAVY receives many thousands of short messages like the one above in very "scruffy" form, and these messages have to be put into a more readable form before they can be passed through many hands. Hence there is an obvious benefit to partially automating this encoding process. Most large text-understanding systems today would not be able to automate the encoding process mentioned above because they were designed under the assumption that the input text consists of well-formed and logical sentences such as newspaper stories and other edited texts.

Here, the word "SAW" as a conjugation of "SEE" would give rise to expectations related to detection and identification. The inferencer also uses knowledge about typical sequences of events (identify before fire) (Cullingford, 1977) and relationships between their participants (friend and foe).

Examine the following message, CONTACT GAINED ON KASHIN. The example can be interpreted as either "Contact was gained on Kashin", meaning "We contacted Kashin", or "Our contact (a ship) made heading towards Kashin." NOMAD picks one of the multiple meanings of the ambiguous word, and calls a blame assignment module to check for goal violations, physical impossibilities, and other semantic conflicts to make sure that the interpretation was correct. If the module detects any conflict, NOMAD attempts to understand the sentence using a different meaning of the ambiguous word.

Missing sentence and clause boundaries: the first clause boundary is identified because there are no expectations pending when "AND" is read. "TRACKING" is understood to be the verb of the second sentence. With a verb chosen and expectations for an actor pending, "CHALLENGED" is used as an adverb describing "UNIT". The second phrase ends before "NO REPLY ..." as again there are no expectations pending at this point. The phrase "NO REPLY" has expectations for communication verbs to follow it, and thus when the clause "OPEN FIRED" is encountered, the final sentence boundary is identified.

Consider the following fragment sentence from our first example, OPEN FIRED. The morphological analyzer is used also to correct the tense of a word, e.g. OPEN FIRED --> OPEN FIRE. The script-based inferencer then determines the tense of the given action using its knowledge about typical sequences of events, e.g. LOCKED ON. OPEN FIRED. --> LOCKED ON. OPENED FIRE.
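Returning to the CONTACT GAINED ON KASHIN example, here is a toy sketch of the commit-check-retry strategy described above: the system commits to one sense of the ambiguous phrase, lets a blame-assignment check look for semantic conflicts, and reinterprets with another sense if the reading is rejected. The sense glosses, the "known facts" and the conflict test are all invented for illustration; this is not NOMAD's code.

# Invented sketch of the commit-check-retry loop for an ambiguous phrase.

SENSES = {"CONTACT GAINED ON": ["our_contact_headed_toward_it", "we_contacted_it"]}

KNOWN_FACTS = {"own_contact_being_tracked": False}   # toy world knowledge

def blame_assignment(reading):
    # Reject readings that conflict with what is already known or impossible.
    if reading == "our_contact_headed_toward_it" and not KNOWN_FACTS["own_contact_being_tracked"]:
        return False        # cannot refer to a contact we are not tracking
    return True

def interpret(phrase):
    for sense in SENSES[phrase]:
        if blame_assignment(sense):
            return sense
    return None             # fall back to asking the user for the final choice

print(interpret("CONTACT GAINED ON"))   # first sense rejected -> "we_contacted_it"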
NOMAD uses a generator specifically designed for the naval domain to produce a well-formed translation of the input message. This "pretty" form of the input message is checked by a user to assure that NOMAD has correctly understood the message. If NOMAD is then told it has incorrectly understood the message, alternative word definitions and other semantic choices are made in a second attempt at understanding. The generator has been tailored to address some of the problems that occur in describing naval activities [Taylor, 1982]. Many of the messages are characterized by incomplete and changing descriptions of actors. These descriptions contain information that may be unknown but doesn't change (name, type of ship, etc.) along with temporal information (location and damage status). The NOMAD generator produces unambiguous descriptions of actors while maintaining brevity.

The following is an actual example showing the capability of NOMAD to handle multiple problems found in a message.

PERISCOPE SIGHTED BY CONSTELLATION ABT 2000 YDS OFF PORT QTR, AND HS HELO VECTORED TO DATUM. GREEN FLARES SIGHTED AFTER VISUAL ON PERISCOPE. HS GAINED ACTIVE CTC AND CONDUCTED TWO ATTACKS.

Two possible paraphrases are generated:

Paraphrase 1: The Constellation identified an enemy submarine that was at 225 degrees 2000 miles from their location. A helicopter-squadron pursued the enemy submarine. The helicopter-squadron identified some green flares. By using an active sonar, the helicopter-squadron identified the enemy submarine, and they fired twice at the enemy submarine.

Paraphrase 2: The Constellation identified an enemy submarine that was at 225 degrees 2000 miles from their location. A helicopter-squadron pursued the enemy submarine. The Constellation identified some green flares. By using an active sonar, the helicopter-squadron identified the enemy submarine, and they fired twice at the enemy submarine.

The main difference that is shown in the paraphrases is the identity of the subject of the second sentence. NOMAD gives preference in this case to the second paraphrase because "AFTER VISUAL ON PERISCOPE" implies that the subject of the second sentence is the same as in the first sentence. However, the user is given the final choice.

The ability to understand text is dependent on the ability to understand what is being described in the text. Hence, a reader of, say, English text must have applicable knowledge of both the situations that may be described in texts (e.g., actions, states, sequences of events, goals, methods of achieving goals, etc.) and the surface structures that appear in the language, i.e., the relations between the surface order of appearance of words and phrases, and their corresponding meaning structures. The process of text understanding is the combined application of these knowledge sources as a reader proceeds through a text. This fact becomes clearest when we investigate the understanding of texts that present particular problems to a reader. Human understanding is inherently tolerant; people are naturally able to ignore many types of errors, omissions, poor constructions, etc., and get straight to the meaning of the text. Our theories have tried to take this ability into account by including knowledge and mechanisms of error noticing and correcting as implicit parts of our process models of language understanding. The NOMAD system is the latest in a line of "tolerant" language understanders, beginning with FOUL-UP, all based on the use of knowledge of syntax, semantics and pragmatics at all stages of the understanding process to cope with errors.
null
null
null
null
Main paper: introduction: Consider the following message, LOCKED ON OPEN FIRED DIW. This is an actual naval message containing sentence boundary problems, missing subjects and objects, an incorrect verb conjugation, and an abbreviation for "dead in water." The NAVY receives many thousands of short messages like the one above in very "scruffy" form, and these messages have =o be put into a more readable form before they can be passed through many hands.Hence there is an obvious benefit co partially automating this encoding process.Most large text-understanding systems today would not be able to automate the encoding process mentioned above because they were designed under the assumption chat the input text consists of well-formed and logical sentences such as newspaper stories and other edited texts. Here, the word "SAW" as a conjugation of "SEE" would give arise to expectations related to detection and identification.The inferencer also uses knowledge about typical sequences of events (identify before fire) (Cullingford, 1977) and relationships between their participants (friend and foe).Examine the following massage,CONTACT GAINED ON KASHIN.The example can be interpreted as either "Contact was gained on Kashin" meanin K '~e contacted Kashin" or "Our contact (e ship) made heading towards Kashin." NOMAD picks one of the multiple maaninEs of the ~i&uous word, and calls a blame assignzenC module to check for goal violations, physical impossibilities, and other semantic conflicts to make sure that the interpretation was correct. If the module detects any conflict, NOMAD attempts Co understand the sentence using a a different meaning of the ambiguous word.Missing sentence and c~ause boundaTies because there are no expecacione pending when "AND" is read."TRACKinK" is understood co be the verb of the second sentence.With a verb chosen and expecations for an actor pending, °CRALLZNCED"is used as an adverb describing "UNIT'. The second phrase ends before "NO REPLY ..."as again there ere no expecacions pending aC chin point.The phrase "NO REPLY" has expectations for communication verbs to follow it, and thus when the clause "OPEN FIRED" is encountered, the final sentence boundr 7 is identified.Consider the following fragment sentence from our first example, OPEN FIRED.The ~orphological analyzer is used also to correct the tense of a word.eg. OPEN FIRED --> OPEN FIRE. The script-based inferencer then determines the tense of the given action using its knowledge about typical sequences of events, eg. LOCKED ON. OPEN FIRED. --> LOCKED ON. OPENED FIRE.NOMAD uses a generator specifically designed for the naval domain co produce a yell formed translation of the input message.This "pretty" form of the input message is checked by a user to 105 assure chaC NOMAD has correctly understood the message.If NOMAD is then told it has incorrectly understood the message, alternative word definitions and ocher semantic choices are made in a second attempt at understanding.A.The generator has been tailored to address some of the problems that occur in describing naval activities [Taylor, 1982] . Many of the messages are characterized by incomplete end changing descriptions of actors. These descriptions contain informationChat may be unknown but doesn't change (name, type of ship, etc.) along with temporal information (location and damage status). 
The NOMAD generator produces unambiguous descriptions of actors while maintaining brevity.

The following is an actual example showing the capability of NOMAD to handle multiple problems found in a message: PERISCOPE SIGHTED BY CONSTELLATION ABT 2000 YDS OFF PORT QTR, AND HS HELO VECTORED TO DATUM. GREEN FLARES SIGHTED AFTER VISUAL ON PERISCOPE. HS GAINED ACTIVE CONTACT AND CONDUCTED TWO ATTACKS.

Two possible paraphrases are generated:

PARAPHRASE 1: The Constellation identified an enemy submarine that was at 225 degrees 2000 miles from their location. A helicopter-squadron pursued the enemy submarine. The helicopter-squadron identified some green flares. By using an active sonar, the helicopter-squadron identified the enemy submarine, and they fired twice at the enemy submarine.

PARAPHRASE 2: The Constellation identified an enemy submarine that was at 225 degrees 2000 miles from their location. A helicopter-squadron pursued the enemy submarine. The Constellation identified some green flares. By using an active sonar, the helicopter-squadron identified the enemy submarine, and they fired twice at the enemy submarine.

The main difference shown in the paraphrases is the identity of the subject of the second sentence. NOMAD gives preference in this case to the second paraphrase because "AFTER VISUAL ON PERISCOPE" implies that the subject of the second sentence is the same as in the first sentence. However, the user is given the final choice.

The ability to understand text is dependent on the ability to understand what is being described in the text. Hence, a reader of, say, English text must have applicable knowledge of both the situations that may be described in texts (e.g., actions, states, sequences of events, goals, methods of achieving goals, etc.) and the surface structures that appear in the language, i.e., the relations between the surface order of appearance of words and phrases and their corresponding meaning structures. The process of text understanding is the combined application of these knowledge sources as a reader proceeds through a text. This fact becomes clearest when we investigate the understanding of texts that present particular problems to a reader. Human understanding is inherently tolerant; people are naturally able to ignore many types of errors, omissions, poor constructions, etc., and get straight to the meaning of the text. Our theories have tried to take this ability into account by including knowledge and mechanisms of error noticing and correcting as implicit parts of our process models of language understanding. The NOMAD system is the latest in a line of "tolerant" language understanders, beginning with FOUL-UP, all based on the use of knowledge of syntax, semantics and pragmatics at all stages of the understanding process to cope with errors. Appendix:
null
null
null
null
{ "paperhash": [ "granger|foul-up:_a_program_that_figures_out_meanings_of_words_from_context", "cullingford|controlling_inference_in_story_understanding", "wilensky|understanding_goal-based_stories" ], "title": [ "FOUL-UP: A Program that Figures Out Meanings of Words from Context", "Controlling Inference in Story Understanding", "Understanding Goal-Based Stories" ], "abstract": [ "The inferencing task of figuring out words from context is implemented in the presence of a large database of world knowledge. The program does not require interaction with the user, but rather uses internal parser expectations and knowledge embodied in scripts to figure out likely definitions for unknown words, and to create context-specific definitions for such words.", "SAM r e a d s s t o r i e s l i k e t h i s by i n t r o d u c i n g a f r a m e l i k e d a t a s t r u c t u r e c a l l e d a S c r i p t [ 2 ] when the f i r s t sen tence i s a n a l y z e d , and b y f i n d i n g subsequent i n p u t s i n t h i s c o n t e x t v i a t he e x p e c t a t i o n s t h a t a r e p r o g r e s s i v e l y a r o u s e d . The r e c o g n i t i o n p rocess i s d r i v e n b y a p a t t e r n m a t c h o f t h e i n p u t c o n c e p t u a l i z a t i o n a g a i n s t a t e m p l a t e s t o r e d i n the S c r i p t . (SAM works i n t e r n a l l y w i t h meaning s t r u c t u r e s coded i n the Concep tua l Dependency sys tem [ 2 ] . )", "Abstract : Reading requires reasoning. A reader often needs to infer connections between the sentences of a text and must therefore be capable of reasoning about the situations to which the text refers. People can reason about situations because they posses a vast store of knowledge which they can use to infer implicit parts of a situation from those aspects of the situation explicitly described by a text. PAM (Plan Applier Mechanism) is a computer program that understands stories by reasoning about the situations they reference. PAM reads stories in English and produces representations for the stories that include the inferences needed to connect each story's events. To demonstrate that it has understood a story, PAM answers questions about the story and expresses the story from several points of view. PAM reasons about the motives of a story's characters. Many inferences needed for story understanding are concerned with finding explanations for events in the story. PAM has a great deal of knowledge about people's goals which it applies to find explanations for the actions taken by a story's characters in terms of that character's goals and plans." ], "authors": [ { "name": [ "R. Granger" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "R. E. Cullingford" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "R. Wilensky" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null ], "s2_corpus_id": [ "9255668", "29714084", "9899836" ], "intents": [ [ "background" ], [ "background" ], [] ], "isInfluential": [ true, false, false ] }
null
504
0.013889
null
null
null
null
null
null
null
null
5549b2850104b513d3d2a3606502d8b6f441a395
35952240
null
An Application of {M}ontague Grammar to {E}nglish-{J}apanese Machine Translation
English-Japanese machine translation requires a large amount of structural transformations in both grammatical and conceptual level. In order to make its control structure clearer and more understandable, this paper proposes a model based on Montague Grammar. Translation process is modeled as a data flow computation process. Formal description tools are developed and a prototype system is constructed. Various problems which arise in this modeling and their solutions are described. Results of experiments are shown and it is discussed how far initial goals are achieved. I. GOAL OF INTERMEDIATE REPRESENTATION DESIGN Differences between English and Japanese exist not only in grammatical level but also in conceptual level.
{ "name": [ "Nishida, Toyoaki and", "Doshita, Shuji" ], "affiliation": [ null, null ] }
null
null
First Conference on Applied Natural Language Processing
1983-02-01
14
14
null
null
null
This section outlines our solution to the requirements posed in the preceding section. We employ Montague Grammar (Montague 1974, Dowty 1981) as the theoretical basis of the translation model. The intermediate representation is designed based on intensional logic.

The intermediate representation for a given natural language expression is obtained by what we call functional analysis. In functional analysis, the input sentence is decomposed into groups of constituents, and the interrelationships among those groups are analyzed in terms of function-argument relationships. Suppose a sentence:

"I don't have a book."    (1)

The functional analysis makes the following two points:

a) (1) is decomposed as: "I have a book" + "not".    (2)

b) In the decomposition (2), "not" is an operator, or function, on "I have a book". The result of this analysis can be depicted as follows:

| "not" >   [ "I have a book" ]    (3)

where | > denotes a function and [ ] denotes an argument. The role of "not" as a function is: as a semantic operator, it negates a given proposition; as a syntactic operator, it inserts an appropriate auxiliary verb and a lexical item "not" into the appropriate position of its argument. This kind of analysis goes on further with embedded sentences until the sentence is decomposed into lexical units or even morphemes.

Montague Grammar (MG) gives a basis for functional analysis. One of the advantages of MG consists in its interpretation system for function forms (or intensional logical forms). In MG, the interpretation of an intensional logical formula is a mapping I from intensional logical formulas to a set-theoretical domain. An important property is that this mapping I is defined under the constraint of compositionality, that is, I satisfies:

I[F(A)] = I[F](I[A])    (5)

For the sake of property (5), the interpretation of a composite formula such as (6) is done as a data flow computation process: each subexpression A is first interpreted into I[A], and the interpretation of an application is obtained by applying the interpretation of the function to the interpretation of its argument (7). By this property, we can easily grasp the processing stream. In particular, we can easily locate trouble and the source of an abnormality when debugging a system. Due to the above property and others, in particular due to its rigorous framework based on logic, MG has been studied in the information science field (Hobbs 1978, Friedman 1978, Yonezaki 1980, Nishida 1980, Landsbergen 1980, Moran 1982, Moore 1981, Rosenschein 1982). Application of MG to machine translation has also been attempted (Hauenschild 1979, Landsbergen 1982), but those systems have only partially utilized the power of MG. Our approach attempts to utilize the full power of MG.

In order to obtain the syntactic structure in Japanese from an intensional logical form, in the same way as the interpretation process of MG, we change the semantic domain from the set-theoretical domain to a conceptual domain for Japanese. Each conceptual unit contains its syntactic expression in Japanese. The syntactic aspect is stressed for generating the syntactic structure in Japanese; conceptual information is utilized for semantics-based word choice and paraphrasing. For example, the following function in the Japanese syntactic domain is assigned to the logical item "not":

not <= (LAMBDA (x) {SENTENCE x [AUX "NAI"]})    (8)
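As a rough illustration of the compositional, data-flow style of interpretation sketched in (5)-(8), the following Python fragment (our own sketch, not the paper's implementation) treats logical terms as nested applications and Japanese CPS values as plain strings. The lexicon entries and the string-concatenation treatment of "NAI" are simplifications; the real system manipulates CPS tree structures, so morphological adjustment is elided here.

# Minimal sketch: the meaning of an application is the application of the
# meanings, mirroring I[F(A)] = I[F](I[A]).  All entries are illustrative.

def evaluate(expr, lexicon):
    """expr is either a lexical constant or a tuple (function, argument)."""
    if isinstance(expr, tuple):
        fn, arg = expr
        return evaluate(fn, lexicon)(evaluate(arg, lexicon))
    return lexicon[expr]

lexicon = {
    "I have a book": "WATASHI-WA HON-WO MOTSU",        # a ready-made CPS (as a string)
    "not": lambda sentence: sentence + " NAI",          # cf. the assignment (8)
}

print(evaluate(("not", "I have a book"), lexicon))
# WATASHI-WA HON-WO MOTSU NAI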
3.1 Definition of Formal Tools

a) English-oriented Formal Representation (EFR) is a version of intensional logic, and gives a rigorous formalism for describing the results of functional analysis. It is based on Cresswell's lambda deep structure (Cresswell 1973). Each expression has a uniquely defined type. A lambda form is employed to denote a function itself.

b) Conceptual Phrase Structure (CPS) is a data structure in which the syntactic and semantic information of a Japanese lexical unit or phrase structure are packed.

i) Example of CPS for a lexical item:

[NOUN "EIGO" with ... ]    (9)

; category; lexical item; conceptual information. ("EIGO" means the English language.)

ii) Example of CPS for a phrase structure:

[NP [ADJ "AKAI" with ... ] [NOUN "RINGO" with ... ] with ... ]    (10)

; "AKAI" means red, and "RINGO" means apple.

c) CPS Form (CPSF) is a form which denotes an operation or function on the CPS domain. It is used to describe mappings from EFR to CPS. i) Constants: CPS. ii) Variables: x, y, ... (indicated by lower-case strings). iii) Variables with constraints: e.g., (! SENTENCE x), a variable x which must be of category SENTENCE. Prefix notation is used for CPSF.

The transfer-generation process for sentence (1) proceeds as follows: the EFR applies "not" to "I have a book"; in transfer, "not" becomes the CPSF (LAMBDA (x) {SENTENCE x [AUX "NAI"]}), which is applied to the CPS for "WATASHI-WA HON-WO MOTSU"; generation then yields a SENTENCE structure consisting of that sentence and the auxiliary "NAI".

Translation is carried out in three stages, described using the formal tools: stage 1 (analysis) extracts an EFR expression from the input sentence; stage 2 (transfer) maps the EFR expression into a CPSF, in which the syntactic aspect is emphasized; stage 3 (generation) evaluates the CPSF to get a CPS; generation of the surface structure from the CPS is straightforward. In order to give readers an overall perspective, we illustrate an example in Fig. 2. Note that the example illustrated includes partial negation; thus the operator "not" is given a wider scope than "always". In the remaining part of this section we will describe how to extract an EFR expression from a given sentence. Then we will discuss the problem which arises in evaluating CPSF, and give its possible solution.

Rules for translating English into EFR form are associated with phrase structure rules. For example, a rule looks like:

NP -> DET + NOUN   where <NP> = <DET>(<NOUN>)    (11)

where <NP> stands for the EFR form assigned to the NP node, etc. Rule (11) says that the EFR for an NP is a form whose function section is the EFR for the DET node and whose argument section is the EFR for the NOUN node. This rule can be incorporated into a conventional natural language parser.

The evaluation process of a CPSF is a sequence of lambda conversions and tree transformations. Evaluation of CPSF is done by a LISP-interpreter-like algorithm. A problem which we call the higher-order problem arose in designing the evaluation algorithm. By the higher-order property we mean that there exist functions which take other functions as arguments (Henderson 1980). CPSF in fact has this property. For example, an adjective "large" is modeled as a function which takes a noun as its argument, e.g., large(database), "large database". On the other hand, adverbs are modeled as functions on adjectives, e.g., very(large), extremely(large), comparatively(large), etc. The difficulty with higher-order functions consists in modification to a function. For explanation, let our temporary goal be regeneration of English from EFR. Suppose we assign to "large" a lambda form (14) which takes a noun and returns a complex noun by attaching the adjective "large". If the adjective is modified by an adverb, say "very", we have to modify (14); we have to transform (14) into a lambda form (15) which attaches the complex adjective "very large" to a given noun. As is easily expected, it is too tedious or even impossible to do this task in general.
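The following minimal sketch, under the assumption of an invented toy grammar and lexicon, shows one way rule (11) above can be attached to phrase structure rules so that each node's EFR is built from the EFRs of its children; it illustrates the idea of syntax-directed EFR construction and is not the paper's parser.

# Illustrative only: EFR terms are nested tuples, e.g. ('a', 'book') ~ a(book).
EFR_RULES = {
    ("NP", ("DET", "NOUN")): lambda det, noun: (det, noun),   # <NP> = <DET>(<NOUN>)
    ("S",  ("NP", "VP")):    lambda np, vp: (np, vp),         # the NP's EFR is applied to the VP's
}

def build_efr(node):
    """node = (category, word) for a leaf, or (category, [children]) otherwise."""
    category, body = node
    if isinstance(body, str):
        return body
    children = [build_efr(child) for child in body]
    rule = EFR_RULES[(category, tuple(child[0] for child in body))]
    return rule(*children)

tree = ("S", [("NP", [("DET", "a"), ("NOUN", "book")]), ("VP", "exists")])
print(build_efr(tree))        # (('a', 'book'), 'exists')  ~  a(book)(exists)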
Accordingly, we take an alternative assignment instead of (14) (16). Since this decision causes a form (17), in which such a constant is applied directly to its argument, to be created in the course of evaluation, we specify what to do in such a case. The rule is defined as follows (18), (19); the lambda form in question may be read: y such that [there is a uniquely specified object y, referred to by the NP "the table", such that y is a block which is restricted to be located on x]. This lambda form is too complicated for the tree transformation procedure to manipulate, so it should be transformed into an equivalent CPS if one exists. The type of the lambda form is known from the context, namely a one-place predicate. So if we apply the lambda form (20) to a "known" entity, say "it", we can obtain a sentence structure like (21). The extraction rule can be written as a pattern matching rule like (22). This rule is called an application rule.

In general, evaluation of a lambda form itself results in a function value (a function as a value). This causes difficulty as mentioned above. Unfortunately, we cannot dispense with lambda forms; lambda variables are needed to link a gap and its antecedent in a relative clause, a verb and its dependants (subject, object, etc.), a preposition and its object, etc. For example, in our model, a complex noun modified by a PP, "block on the table", is assigned an EFR of this kind (23). Of course, this way of processing is not desirable; it introduces extra complexity. But this is a trade-off of employing formal semantics; the same sort of processing is also done, by rather opaque procedures, in conventional MT systems.

This section illustrates how the English-Japanese translation process is modeled using the formal tools. Firstly, how several basic linguistic constructions are treated is described, and then the mechanism for word choice is presented.

a) Sentence: a sentence consists of an NP and a VP. The VP is analyzed as a one-place predicate, which constructs a proposition out of the individual referred to by the subject. The VP is further decomposed into an intransitive verb, or a transitive verb + object. Intransitive verbs and transitive verbs are analyzed as one-place predicates and two-place predicates, respectively. A one-place predicate and a two-place predicate are assigned a CPSF function which generates a sentence out of an individual, and one which generates a sentence out of a pair of individuals, respectively. Thus, the transitive verb "constructs" is assigned a CPSF (24); given two individuals, this function attaches to each argument a case marker (corresponding to JOSHI, or Japanese postfix) and then generates a sentence structure. The assignment (24) may be extended later to incorporate the word choice mechanism.

The treatment of NPs in Montague-based semantics is significant in that the EFR expression for an NP is given a wider scope than that for a VP. Thus the EFR form for an NP-VP construction looks like:

<NP>(<VP>)    (25)

where <x> means the EFR form for x, x = NP, ... . This treatment is convenient for an English quantifier which is syntactically local but semantically global. For example, the first-order logical form for the sentence

"this command needs no operand"    (26)

looks like (27), where the operator "not", which comes from the determiner "no", is given a wider scope than "needs".
This translation is straightforward in our model; an EFR of the same shape (28) is extracted from (26). If we make an appropriate assignment, including one for "no" (29), we can get (27) from (28). In English-Japanese machine translation, this treatment gives an elegant solution to the translation of prenominal negation, partial negation, etc. Since the Japanese language does not have a syntactic device for prenominal negation, "no" must be translated into mainly two separate constituents: one is a RENTAISHI (Japanese determiner) and the other is an auxiliary verb of negation. One possible assignment of a CPSF for "no" realizes exactly this split.

By <MODIFIER> we mean modification of a noun by adjectives, prepositional phrases, infinitives, present/past participles, etc. The translation process is determined by the CPSF assigned to <DET>. In the cases of "the" or "a/an", the translation process is a bit complicated. It is almost the same as the process described in detail in section 3: firstly the <MODIFIER>s and <NOUN> are applied to an individual like "the-thing" (for "the") or "some-thing" (for "a/an") and a sentence is obtained; then a noun structure is extracted and an appropriate RENTAISHI, or Japanese determiner, is attached.

c) Other cases: some other cases are illustrated by the examples in Fig. 3, e.g., 1) subordinate clauses: "When S1, S2" is analyzed as (when(<S1>))(<S2>) and rendered with "TOKI", yielding [[S1] "TOKI" [S2]]; 2) tense, aspect and modals: "I bought a car" is analyzed as did(<I buy a car>) and rendered with "TA", turning "WATASHI-WA JIDOUSHA-WO KAU" into "WATASHI-WA JIDOUSHA-WO KAU TA"; for an indirect question, the question is generated first and then transformed into a sentence. (Here <x>, {x}, [x] and "x" stand for the EFR for x, the CPSF for x, the CPS for x, and the CPS for the Japanese string x, respectively.)

In order to obtain a high-quality translation, a word choice mechanism must be incorporated, at least for handling cases like: a verb chosen in accordance with its object or its agent, adjective-noun, adverb-verb, and preposition combinations. Word choice is partially solved in the analysis phase as word meaning disambiguation. So the design problem is to determine to what degree word sense is disambiguated in the analysis phase and what kinds of ambiguities are left until the transfer-generation phase.

Suppose we are to translate a given preposition. The occurrence of a preposition is classified as: (a) when it is governed by verbs or nouns: (a-1) when government is strong, e.g., study on, belong to, provide for; (a-2) when government is weak, e.g., buy ... at store; (b) otherwise: (b-1) idiomatic, e.g., in particular, in addition; (b-2) related to its object, e.g., by bus, with high probability, without+ING. We treat (a) and (b-1) as an analysis problem and handle them in the analysis phase. (b-2) is more difficult and is treated in the transfer-generation phase, where partial semantic interpretation is done.

Word choice in the transfer-generation phase is done by using conditional expressions and attributive information included in the CPS. For example, the transitive verb "develop" is translated differently according to its object: develop + (an object of class system) ... KAIHATSU-SURU; develop + (an object of class film) ... GENZOU-SURU. The following assignment of CPSF makes this choice possible:

develop <= (LAMBDA (x y) [(CLASS y)=SYSTEM -> {"x-GA y-WO KAIHATSU-SURU"}; (CLASS y)=FILM -> {"x-GA y-WO GENZOU-SURU"}])

operating-system <= [NOUN "OS" with CLASS=system; ... ]    (36)

film <= [NOUN "FUILUMU" with CLASS=film; ... ]
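A small Python sketch of the conditional word choice shown in the assignments above. The CPS is represented here as a plain dict, the subject "WAREWARE" ("we") is invented for the illustration, and only the two classes from the example are handled; this is a sketch of the idea, not the system's CPSF evaluator.

# Illustrative only: word choice keyed on the conceptual class of the object.
def cps(category, item, **info):
    return {"cat": category, "item": item, **info}

def develop(x, y):                      # cf. the CPSF assigned to "develop"
    verb = {"system": "KAIHATSU-SURU", "film": "GENZOU-SURU"}[y["CLASS"]]
    return f'{x["item"]}-GA {y["item"]}-WO {verb}'

os_unit = cps("NOUN", "OS", CLASS="system")       # "operating system"
film = cps("NOUN", "FUILUMU", CLASS="film")       # "film"
we = cps("NOUN", "WAREWARE")                      # invented subject for the example

print(develop(we, os_unit))   # WAREWARE-GA OS-WO KAIHATSU-SURU
print(develop(we, film))      # WAREWARE-GA FUILUMU-WO GENZOU-SURU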
1. To make this type of processing possible in the cases where the deep object is moved from its surface position, a lambda variable x is explicitly used at the EFR level as a place holder for the gap. A functor "which" dominates both the EFR for the embedded sentence and that for the head noun. A CPSF assigned to the functor "which" sends the conceptual information of the head noun to the gap as follows: firstly it creates a null NP out of the head noun, then the null NP is substituted into the lambda variable for the gap.

In word choice, or semantics-based translation in general, various kinds of transformations are carried out on the target language structure. For example, "her arrival makes him happy" is one such case.

We have constructed a prototype system. It is simplified, compared with a practical system, in that it has only a limited vocabulary. Sample texts are taken from real computer manuals or abstracts of computer journals. Initially, four sample texts (40 sentences) were chosen; currently this has been extended to 10 texts (72 sentences). Additional features were introduced in order to make the system more practical. a) Parser: declarative rules are inefficient for dealing with sentences in real texts. The parser uses production-type rules, each of which is classified according to its invocation condition. Declarative rules are manually converted into this rule type. b) Automatic posteditor: the transfer process defined so far concentrates on local processing. Even if certain kinds of ambiguities are resolved in this phase, there still remains a possibility that new ambiguity is introduced in the generation phase. Instead of incorporating into the transfer-generation phase a sophisticated mechanism for filtering out ambiguities, we attach a postprocessor which will "reform" a phrase structure yielding ambiguous output. Tree-to-tree transformation rules are utilized here.

Current results of our machine translation system are shown in the Appendix. Translation of a Sample Text. [Sample English source text: an abstract describing Ethernet, a system for local communication among computing stations.]
Accordingly, a large amount of transformation at various levels is required in order to obtain a high-quality translation. The goal of this research is to provide a good framework for carrying out those operations systematically. The solution depends on the design of the intermediate representation (IR). Basic requirements on intermediate representation design are listed below.

a) Accuracy: the IR should retain the logical conclusions of the natural language expression. The following distinctions, for example, should be made at the IR level.

It is often the case that a given English word must be translated into different Japanese words or phrases if it has more than one word meaning. But it is not reasonable to capture this problem solely as a problem of word meaning disambiguation in the analysis phase; the needed depth of disambiguation depends on the target language, so it is also handled in the transfer phase. In general, the meaning of a given word is recognized based on its relation to other constituents in the sentence or text which are semantically related to the given word. To make this possible in the transfer phase, the IR must provide a link to the semantically related constituents of a given item. For example, an object of a verb should be accessible at the IR level from the verb, even if the relation is implicit in the surface structure (e.g., passives, relative clauses, and their combinations).

c) Prediction of control: given an IR expression, the model should be able to predict explicitly what operations are to be done, and in what order.

Some transformation rules are word-specific. The IR interpretation system should be designed to deal with those word-specific rules easily.

e) Computability: all processing should be effectively computable. Any IR is useless if it is not computable.
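As a rough sketch of the linking requirement just described, that a verb's deep object remain reachable from the verb even when the surface structure is a passive or a relative clause, the following fragment uses an invented Node structure; it only illustrates the kind of access an IR might provide and is not the paper's actual representation.

# Illustrative only: deep roles are stored on the verb node, independent of
# surface word order, so "the film which was developed by us" and
# "we developed the film" expose the same verb-object relation.
from dataclasses import dataclass, field

@dataclass
class Node:
    head: str
    args: dict = field(default_factory=dict)    # deep roles, not surface positions

film = Node("film")
develop = Node("develop", {"agent": Node("we"), "object": film})

print(develop.args["object"].head)   # film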
null
Main paper: principle of tp, anslation: This section outlines our solution Co the requirements posed in the preceding section.We employ MonCague Gram=mr (HonCague 1974, Dowry 1981) as a theoretical basis of translation model. Inter~edlate representation is designed based on intensional logic.Intermediate representation for a given natural language expression is obtained by what we call functional analysis.In functional analysis, input sentence is decomposed into groups of constituents and interrelationship among those groups are analyzed in terms of function-argument relationships. Suppose a sentence:EQUATIONThe functional analysis makes following two points:a) (L) is decomposed as:"I have a book" ÷ "nOt".(2) b) In the decomposition (2), "not" is an operator or function co "I have a book."The result of this analysis can be depicted as follows:~ ""I have a book" I (3)wherel >denotes a function and[ Idenotes en argument. The role of "not" as a function is:"not" as a semantic operstor: it negates a given proposition; "not" is a syntactic operator:it inserts an appropriate auxiliary verb and = lexical item "not" into appropriate position of its argument.This kind of analysis goes on further with embedded sentence until it is decomposed into lexical units or even morphemes.Montague Grammar (MG) gives a basis of functlonel analysis.One of the advantages of MG consists in its interpretation system of function form (or intensional logical form).In MG, interpretation of an intenelonal logical formula is a mapping I from incenaional logical formulas to set theoretical domain.Important property is chat this ampping I is defined under the cons-trainC of compositlonality, that is, I satisfies: A For the sake of property (5), ~he interpretation of (6) is done as a data flow computation process as followa:EQUATIONA ~I[A] , | A "I Its c O } ~7)By this property, we can easily grasp the processing stream.In particular, we can easily ~hooc trouble and source of abnormality when debugging a system. Due to the above property and others, Ln particular due to its rigorous framework based .)n Logic, MG has been studied in ~nformation science field (Hobbs 1978 , Friedman |978, Yonezaki [980, Nishida 1980 , Landsbergen 1980 , Moran 1982 , Moore 1981 , Rosenschein 1982 .Application of MG to machine translation was also attempted (Hauenschild 1979 , Landsbergen 1982 , but those systems have only partially utilized the power of MG. Our approach attempts to utilize the full power of MGoIn order to obtain the syntactic structure in Japanese from an intensional logical form, in the same way as interpretation process of MC, we change the semantic domain from set theoretical domain to conceptual domain for Japanese.Each conceptual unit contains its syntactic expression in Japanese.Syntactic aspect is stressed for generating syntactic structure in Japanese.Conceptual information is utilized for semantic based word choice end paraphrasing.For example, the following function in Japanese syntactic domain is assigned to • logical item "not":EQUATION3.1 Definition of Formal Tools e) English oriented Formal Representation (EFR) is a version of intensional logic, and gives a rigorous formalism for describing the results of functional analysis.It is based on Cresswell's lambda deep structure (Cresawell 1973) . Each expression has a uniquely defined type. Lambda form is employed to denote function itself. 
b) Conceptual Phrase Structure (CPS) is a data structure in which syntactic and semantic information of a Japanese lexicel unit or phrase structure are packed.i) example of CPS for a lexical item:EQUATIONcategory; lexical item; conceptual info.; "EIGO" means English" language.ii) example of CPS for phrase structure:[NP [ADJ "AKAI" with ... ] [NOUN "RINGO" with ... ] with ... ] (i0)Transfer-generation process for the sentence (1) looks like:"I don't have a book" ~',,I have a book" I // • TRANSFER / (LAMBDA (x) {SENTENCE x [AUX "NAI"]}) TRANS FE R, GENE RAT I ON S WATASHI-WA HON-WO MOTSU ,,-..._./ S S AUX; "AKAI" means red, and "RINGO" means apple. c) CPS Form (CPSF) is a form which denotes operation or function on CPS domain.It is used to give descriptions to mappings from EFR to CPS.i) Constants: CPS. ii) Variables:x, y, ... . (indicated by lower case strings).iii) Variables with constraints: e.g., (! SENTENCE x).; variable x which must be of category SENTENCE. .. CPSF ..• . CPS ..Prefix notation is used for CPSF, described using Formal Tools. / and syntactic aspect is emphasized.stage 3 (generation): evaluates the CPSF to get CPS; generation of surface structure from CPS is straightforward.In order to give readers an overall perspective, we illustrate an example in Fig.2 . Note that the example illustrated includes partial negation.Thus operator "not" is given a wider scope than "always".In the remaining part of this section we will describe how to extract EFR expression from a given sentence. Then we will discuss the problem which arises in evaluating CPSF, and give its possible solution. extracting efr expression from input sentence: Rules for translating English into EFR form in .~ssociated with each phrase structure rules.For example, the rule looks llke:NP -> DET+NOUN where <NP>-<DET>(<NOUN>) (ii)where, <NP> stands for an EFR form assigned tu ~he NP node, etc. Rule (II) says chat EFR for an NP is a form whose function section is EFR for a DET node and whose argument section is EFR for a NOUN node. This rule can be incorporated into conventional natural language parser.Evaluation process of CPSF is a sequence of lambda conversions and tree ~ransformations. Evaluation of CPSF is done by a LISP ~ncerpreter-l i ke al gori t hm. A pr obl em whi ch we cal l hi gher order problem arose in designing the evaluation algorithm.By higher order property we mean that there exist functions which take other functions as arguments (Henderson 1980) . CPSF in fact has this property.For example, an adjective "large" is modeled as a function which takes a noun as its argument.For example, large(database), "large database"On the other hand, adverbs are modeled as functions to adjectives, For example, very(large), extremely(large), comparatively(large), etc.The difficulty with higher order functions consists in modifiction to function. For explanation, let our temporal goal be regeneration of English from EFR.Suppose we assign to "large" a lambde form like:EQUATIONwhich takes a noun and returns a complex noun by attaching an adjective "large". If the adjective is modified by an adverb, say "very", we have to modify (14); we have to transform (14) into a lambda form like:EQUATIONwhich attaches a complex adjective "very large" to a given noun. As is easily expected, it is too tedious or even impossible to do this task in general. 
Accordingly, we take an alternative assignment instead of (14), namely:EQUATIONSince this decision cuases a form:EQUATIONto be created in the course of evaluation, we specify what to do in such case. The rule is defiend as follows: EQUATIONEQUATION; which may read: is y:[there is a uniquely specified object y referred to by an NP "the table", such that y is a block which is restricted to be located on x.]This lambda form is too complicated for tree transformation procedure to manipulate.So it should be transformed into equivalent CPS if it exists.The type of the lambda form is known from the context, namely one-place predicate. So if we apply the lambda form (20) to "known" entity, say "it", we can obtain sentence structure like: EQUATIONThe extraction rule can be written as a pattern matching rule like:EQUATIONThis rule is called an application rule.In general, evaluation of [ambda form itself results in a function value (function as a value).This causes difficulty as mentioned above. Unfortunately, we can't dispense with lambda forms; lambda variables are needed to link gap and its antecedent in relative clause, verb and its dependants (subject, object, etc), preposition and its object, etc. For example, in our model, an complex noun modified by a PP: "block on the table"£s assigned a following EFR:Of course, this way of processing is not desirable; it introduces extra complexity. But this is a trade off of employing formal semantics; the same sort of processing is also done rather opaque procedures in conventional MT system. modeling translation process: This section illustrates how English-Japanese translation process is modeled using formal tools.Firstly, how several basic linguistic constructions are treated is described and then mechanism for word choice is presented.a) Sentence: sentence consists of an NP and a VF. VP is analyzed as a one-place predicate, which constructs a proposition out of an individual referred Co by the subject.VP is further decomposed into intransitive verb or cranaltive verb + object.Intransitive verbs and transitive verbs ere analyzed as one-place predicates and two-place predicate, respectively.One-place predicate and two-place predicate are assigned a CFSF function which generates a sentence ouc of an individual and chat which generates a sentence out of a pair of individuals, respectively.Thus, a transitive verb "constructs" is assigned a CPSF form:EQUATION; given two individuals, this function attaches co each argument a case marker (corresponding to JOSHI or Japanese postfix) and then generates a sentence structure.The assignment (24) may be extended later to incorporate word choice mechanism.Treatment of NP in MonCague-besed semantics is significant in chat EFR expression for an NP is given a wider scope then Chat for a VP. Thus the EFR form for an ~P-VP construction looks llke:EQUATIONwhere <x> means EFR form for x, x=NP,... .English quantifier which is syntactically local but semantically global.For example, first order logical form for a sentence:"this command needs no operand" (267 looks Like:EQUATIONwhere operator "not", which comes from a determiner "no", is given a wider scope than "needs". 
This translation is straightforward in our model; the following EFR is extracted from (26):EQUATION[f we make appropriate assignment including:EQUATIONwe can get (27) from (28).In Engllsh-Japanese -,-'chine translation, this treatment gives an elegant solution to the :ranalation of prenominal negation, partial negation, etc.Since Japanese language does not have a synCactlc device for prenominal negation, "no" must be translated into asainly two separate constituents: one is a RENTAISHI (Japanese decerminer) and another is an auxiliary verb of negation.One possible assignment of CFSF looks like: ...)).EQUATIONBy <MOD£FIER> we mean modification to noun by adjectives, prepositional phrases, infinitives, present/past particles, etc. The translation process is determined by a CPSF assigned co <DET>, En cases of "the" or "a/an", translation process is abic complicated. Et is almost the same as the process described in detail in section 3: firstly the <MODIFIER>s and <NOUN> are applied Co an individual like "the chinE" (the) or "some-chinE" (a/an) and a sentence will be obtained; then a noun structure is extracted and appropriate RENTAISHI or Japanese determiner is attached. c) Other cases: some ocher cases are illustrated by examples in Fig.3 .• In order to obtain high quality translation, word choice .~chanism must be incorporated at least for handling the cases like: i) subordinate clause:"When SI, S2" & (when (<SI >) ) (<$2>) "TOKI" [$I] [[SI] "TOKI 's] [$2] [[Sl] "TOKI" [S2]]2) tense, aspect, modal: "I bought a car" did(<I buy a car>) "TA" "WATASHI-WA JIDOUSHA-WO KAU" "WATASHI-WA JIDOUSHA-WO KAU TA" ; indirect question is generated first, then it is transformed into a sentence. Construction. <x>, {x}, [x] and "x" stand for EFR for x, CPSF for x, CPS for x, and CPB for Japanese string x, respectively. verb in accordance with its object or its agent, adjective-noun, adverb-verb, and preposition.Word choice is partially solved in the analysis phase as a word meaning disambiguation.So the design problem [s to determine to what degree word sense is disamblguated in the analysis phase and what kind of ambiguities is left until transfer-generation phase.Suppose we are to translate a given preposition.The occurence of a preposition [s classified as:(a) when it is governed by verbs or nouns:(a-l) when governmant is strong: e.g., study on, belong to, provide for; (a-2) when govern.ment is weak: e.g., buy ... at store; (b) otherwise:(b-I) idiomatic: e.g., in particular, in addition; (b-2) related to its object: e.g., by bus, with high probability, without÷ING.We treat (a) and (b-l) as an analysis problem and handle them in the analysis phase. (b-2) is more difficult and is treated in the transfergeneration phase where partial semantic interpretation [s done.Word choice in transfer-generatlon phase is done by using, conditional expression and attributive information included in CPS. For example, a transitive verb "develop" is translated differently according to its object: develop ~ (* system) ... KAINATSU-SURU t (+ film) GENZOU-SURU.The following assignment of CPSF makes this choice poss ib le :deve lop <= (LAMBDA (x y) [(CLASS y)=SYSTEM -> ("x-GA y-WO KAIHATSU-SURU"} ; (CLASS y)-FILM ->("x-GA y-WO GENZOU-SURU"};EQUATIONoperating-syStem <-[NOUN "OS" with CLASS-system; ... ], (36) film <-[NOUN "FUILUMU" with CLASS-film; ... 
1.To make this type of processing possible in the cases where the deep object is moved from surface In EFR level, lambda variable x is explicitly used as a place holder for the gap.A functor "which" dominates both the EFR for the embedded sentence and that for the head noun. A CPSF assigned to the functor "which" sends conceptual information of the head noun to the gap as follows: firstly it creates a null NF out of the head noun, then the null NP is substituted into the lambda variable for the gap.In word choice or semantic based translation in general, various kinds of transformations are carried out on target language structure. For example,her arrival makes him happy, We have constructed a prototype system.It is slmplified then practical system in:-it has only limited vocabulary, Sample texts are taken from real computer manuals or abstracts of computer journals. Initially, four sample texts (40 sentences) are chosen. Currently it is extended to I0 texts (72 sentences).Additional features are introduced Ln order to make the system more practical. a) Parser: declarative rules are inefficient for dealing with sentences in real cexts. The parser uses production type rules each of which is classified according to its invocation condition.Declarative rules are manually converted into this rule type. b) Automatic postedicor: transfer process defined so far concentrates on local processings. Even if certain kinds of ambiguities are resolved in this phase, there still remains a possibility that new ambiguity is introduced in generation phase. Instead of incorporating into the transfer-generation phase a sophisticated mechanism for filtering out ambiguities, we attach a postprocessor which will "reform" a phrase structure yielding ambiguous output.Treetree transformation rules are utilized here.Current result of our machine cransLacion system is shown in Appendix.Translation of a Sample Text. }((h¢.*ne: ,, a %~qem (or IOcai communlcat,on among computing statiOns Our experlmcn[ai E.thcrnc; u~;.: ~ppcc coaxial eabl~ Io c~rn ~urlaoie-len~th dlgltal data packets among, for example, pcrsonai minicomputers, pr~nung f'aciliues, iar~¢ ~ie s~orage de,.~ces, magnetic r~pe backup stauons.lar~er cenlra! computers, and longer-haul communlcauor~ equzpment.The ,~hared communicauon facilit.~, a branchm8 E~er. ~s passive. A sIauons E~heme~ interface connecL~ b,-sonalb through an interface cabie to a Lranscezver which in turn ~ps mLo the passing F/her 4 packet is hmadcas{ onto the F:'ther. is heard b.~ all smr/ons, and is cop~ed from the Er.her b.~ desunauons ~.hich soiL'c: ~! accorain~ to the packe:s leadm8 address bits. This ,s 0madc.~l packe: s~tching alld shouic be disunguzshec~ from s(ore-and-t'or~ard packe( switchin 8 m wh,ch muun9 ~ nerformed h~ mtermedmte pruccssm~ elements. To handle {he demand~ of ~,rowth. an F/heine! can be ex~ended usm@ packet repeaters (or signaJ regeneration, packe{ filters t'or crar~c locaJzzauon, an(~ p~ket gate~a.vs /'or intcmetwurk address extension.Control is completeb dnstrioutea among stauons with packet transmissions coordinated ',nmugh : .Accordingly, a large amount of transformations in various levels are required in order to obtain high quality translation. The goal of this research is to provide a good framework for carrying out those operations systematically. The solution depends on the design of intermediate representation (IR). Basic requirements to intermediate representation design are listed below. 
a) Accuracy: IR should retain logical conclusion of natural language expression.The following distinctions, for example, should be made in IR level: it is often the case that a given English word must be translated into different Japanese words or phrases if it has more than one word meanings.But it is not reasonable to capture this problem solely as a problem of word meaning disambiguation in analysis phase; the needed depth of disamb£iuation depends on target language.So it is also handled in transfer phase.In general, meaning of • given word is recognized based on the relation to other constituents in the sentence or text vhicb is semantically related to the given word. To make this poaslble in transfer phase, IR must provide a link to semantically related constituents of a given item.For example, an object of a verb should be accessible in IR level from the verb, even if the relation is implicit ~n the surface structure (as., passives, relative claus=a, and their combinations, etc.) ¢) Prediction of control:given an IR expression, the model should be able to predict explicitIy what operations are co be done in what order.some sort of transformation rules ere word specific.The IR interpretation system should be designed Co deal with those word specific rules easily. e) Computability:All processing= should be effectively computable.Any IR is useless if it is not computable. Appendix:
null
null
null
null
{ "paperhash": [ "landsbergen|machine_translation_based_on_logically_isomorphic_montague_grammars", "rosenschein|translating_english_into_logical_form", "moran|the_representation_of_inconsistent_information_in_a_dynamic_model-theoretic_semantics", "moore|problems_in_logical_form", "nishida|hierarchical_meaning_representation_and_analysis_of_natural_language_documents", "landsbergen|adaptation_of_montague_grammar_to_the_requirements_of_question-answering", "yonezaki|database_system_based_on_intensional_logic" ], "title": [ "Machine Translation Based on Logically Isomorphic Montague Grammars", "Translating English Into Logical Form", "The Representation of Inconsistent Information in a Dynamic Model-Theoretic Semantics", "Problems in Logical Form", "Hierarchical Meaning Representation and Analysis of Natural Language Documents", "Adaptation of Montague Grammar to the Requirements of Question-Answering", "Database System Based on Intensional Logic" ], "abstract": [ "The paper describes a new approach to machine translation, based on Montague grammar, and an experimental translation system, Rosetta, designed according to this approach. It is a multi-lingual system which uses 'logical derivation trees' as intermediate expressions.", "A scheme for syntax-directed translation that mirrors compositional model-theoretic semantics is discussed. The scheme is the basis for an English translation system called PATR and was used to specify a semantically interesting fragment of English, including such constructs as tense, aspect, modals, and various lexically controlled verb complement structures. PATR was embedded in a question-answering system that replied appropriately to questions requiring the computation of logical entailments.", "Model-theoretic semantics provides a computationally attractive means of representing the semantics of natural language. However, the models used in this formalism are static and are usually infinite. Dynamic models are incomplete models that include only the information needed for an application and to which information can be added. Dynamic models are basically approximations of larger conventional models, but differ is several interesting ways.The difference discussed here is the possibility of inconsistent information being included in the model. If a computation causes the model to expand, the result of that computation may be different than the result of performing that same computation with respect to the newly expanded model (i. e. the result is inconsistent with the information currently in the dynamic model). Mechanisms are introduced to eliminate these local (temporary) inconsistencies, but the most natural mechanism can introduce permanent inconsistencies in the information contained in the dynamic model. These inconsistencies are similar to those that people have in their knowledge and beliefs. The mechanism presented is shown to be related to both the intensional isomorphism and impossible worlds approaches to this problem.", "Abstract : Most current theories of natural-language processing propose that the assimilation of an utterance involves producing an expression or structure that in some sense represents the literal meaning of the utterance. It is often maintained that understanding what an utterance literally means consists in being able to recover such a representation. In philosophy and linguistics this sort of representation is usually said to display the \"logical form\" of an utterance. 
This paper surveys some of the key problems that arise in defining a system of representation for the logical forms of English sentences and suggests possible approaches to their solution. The author first looks at some general issues relating to the notion of logical form, explaining why it makes sense to define such a notion only for sentences in context, not in isolation, and then discusses the relationship between research on logical form and work on knowledge representation in artificial intelligence. The rest of the paper is devoted to examining specific problems in logical form. These include the following: quantifiers; events, actions and processes; time and space; collective entities and substances; propositional attitudes and modalities; and questions and imperatives.", "This paper attempts to systematize natural language analysis process by (1) use of a partitioned semantic network formalism as the meaning representation and (2) stepwise translation based on Montague Grammar. The meaning representation is obtained in two steps. The first step translates natural language into logical expression. The second step interprets logical expression to generate network structure. We have implemented set of programs which performs the stepwise translation. Experiments are in progress for machine translation and question answering.", "In this paper a new version of Montague Grammar (MG) is developed, which is suitable for application in question-answering systems. The general framework for the definition of syntax and semantics described in Montague's 'Universal Grammar' is taken as starting-point. This framework provides an elegant way of defining an interpretation for a natural language (NL): by means of a syntax-directed translation into a logical language for which an interpretation is defined directly.In the question-answering system PHLIQA 1 [1] NL questions are interpreted by translating them into a logical language, the Data Base Language, for which an interpretation is defined by the data base. The similarity of this setup with the Montague framework is obvious. At first sight a QA system like this can be viewed as an application of MG. However, a closer look reveals that for this application MG has to be adapted in two ways.", "Model theoretic semantics of database systems is studied. As Rechard Montague has done in his work,5 we translate statements of DDL and DML into intensional logic and the latter is interpreted with reference to a suitable model. Major advantages of its approach include (i) it leads itself to the design of database systems which can handle historical data, (ii) it provides with a formal description of database semantics." ], "authors": [ { "name": [ "J. Landsbergen" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "S. Rosenschein", "Stuart M. Shieber" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "D. Moran" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Robert C. Moore" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "T. Nishida", "S. Doshita" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "J. 
Landsbergen" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "N. Yonezaki", "H. Enomoto" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null, null, null, null, null ], "s2_corpus_id": [ "8292665", "9564084", "1360818", "18655604", "6932851", "30559766", "2744607" ], "intents": [ [], [], [ "methodology" ], [ "methodology" ], [], [ "methodology" ], [] ], "isInfluential": [ false, false, false, false, false, false, false ] }
Problem: English-Japanese machine translation requires a large amount of structural transformations at both grammatical and conceptual levels. Solution: This paper proposes a model based on Montague Grammar to make the control structure clearer and more understandable, with the translation process modeled as a data flow computation process.
504
0.027778
null
null
null
null
null
null
null
null
ef44855256d8a83faa2cc3d02e06363f38da661a
12640236
null
Specialized Information Extraction: Automatic Chemical Reaction Coding From {E}nglish Descriptions
In an age of increased attention to the problems of database organization, retrieval problems and query languages, one of the major economic problems of many potential databases remains the entry of the original information into the database. Specialized information extraction (SIE) systems are therefore of potential importance in the entry of information that is already available in certain restricted types of natural language text. This paper contains a discussion of the problems of engineering such systems and a description of a particular SIE system, designed to extract information regarding chemical reactions from experimental sections of papers in the chemical literature and to produce a data structure containing the relevant information.
{ "name": [ "Reeker, Larry H. and", "Chmora, Elena M. and", "Blower, Paul E." ], "affiliation": [ null, null, null ] }
null
null
First Conference on Applied Natural Language Processing
1983-02-01
12
7
null
In an age of increased attention to the problems of database organization, retrieval problems, and query languages, one of the major economic problems of many potential databases remains the entry of the original information into the database. A large amount of such information is currently available in natural language text, and some of that text is of a highly stylized nature, with a restricted semantic domain. It is the task of specialized information extraction (SIE) systems to obtain information automatically from such texts and place it in the database. As with any system, it is desirable to minimize errors and human intervention, but a total absence of either is not necessary for the system to be economically viable. In this paper, we will first discuss some general characteristics of SIE systems, then describe the development of an experimental system to assist in the construction of a database of chemical reaction information. Many journals, such as the Journal of Organic Chemistry, have separate experimental sections in which the procedures for preparing chemical compounds are described. It is desired to extract certain information about these reactions and place it in the database. A reaction information form (RIF) was developed in another project to contain the desired information. The purpose of the system is to eliminate the necessity, in a majority of cases, for a trained reader to read the text and enter the RIF information into the machine.

In the discussion below, we shall use the term grammar to mean a system consisting of a lexicon, a syntax, a meaning representation language, and a semantic mapping. The lexicon consists of the list of words in the language and one or more grammatical categories for each word. The syntax specifies the structure of sentences in the language in terms of the grammatical categories. Morphological procedures may specify a "syntax" within classes of words and thereby reduce the size of the lexicon. A discourse structure, or extrasentential syntax, may also be included. The semantic mapping provides for each syntactically correct sentence a meaning representation in the meaning representation language, and it is the crux of the whole system. If the semantic mapping is fundamentally straightforward, then the syntactic processing can often be reduced as well. This is one of the virtues of SIE systems: because of the specialized subject matter, one can simplify syntactic processing through the use of ad hoc procedures (either algorithmic or heuristic). In many cases, the knowledge that allows this is nonlinguistic knowledge, which may be encoded in frames. Although this is not always the sense in which "frame" is used, this is the sense in which we shall use the term in our discussion below: frames encode nonlinguistic "expectations" brought to bear on the task. In this light, it is interesting to explore the subject of case-slot identity, as raised by Charniak (1981). If the slots are components of frames, and cases are names for arguments of a predicate, then the slots in any practical language understanding system may not correspond exactly to the cases in a language.
In fact, the predicates may not correspond to the frames. On the other hand, if the language is capable of expressing all of the distinctions that can be understood in terms of the frames, one would expect them to grow closer and closer as the system becomes less specialized. The decision as to whether to maintain the distinction between predicate/case and frame/slot has a "Whorfian" flavor to it. We have chosen to maintain that distinction.

Despite the general decision with regard to predicates and slots, some of the grammatical categories in our work do not correspond precisely to conventional grammatical categories, but are specialized for the reaction information project. An example is "chemical name". This illustrates another reason that SIE systems are more practical than more general language understanding systems: one can use certain ad hoc categories based upon the characteristics of the problem (and of the underlying meanings represented). This idea was advocated several years ago by Thompson (1966) and used in the design of a specialized database query system (DEACON). Its problem in more general language processing applications - that the categories may not extend readily from one domain to another and may actually complicate the general grammar - does not cause as much difficulty in the SIE case. The danger of using ad hoc categories is, of course, that one can lose extensibility, and must make careful decisions in advance as to how specialized the SIE system is going to be.

The term "specialized information extraction" is necessarily a relative one. Information extraction can range from the simplest sorts of tasks, like obtaining all names of people mentioned in newspaper articles, to a full understanding of relatively free text. The simplest of these require of the program little linguistic or empirical knowledge, while the most complex require more knowledge than we currently know how to provide. But when we refer to an SIE task, we will mean one with restricted syntactic variety and a specialized semantic domain. The lessened syntactic variety in SIE tasks means that the amount of syntactic analysis needed is lessened, and also the complexity of the machinery for the semantic mapping. At the same time, the specialized semantic domain allows the use of empirical knowledge to increase the efficiency and effectiveness of analysis procedures (the lessening of ambiguity being only one aspect of this).

The texts of the SIE task that we have chosen are highly structured paragraphs describing laboratory procedures for synthesizing organic substances, taken from the experimental sections of articles in J. Org. Chem. Our feeling is that the full text of chemical articles is beyond the state of the SIE art, if one wants to extract anything more than trivial information; but the limited universe of discourse of the experimental paragraphs renders SIE on them feasible.

Since the days of the early mechanical translation efforts, the amount of study of natural language phenomena, both from the point of view of pure theory and of determining specific facts about languages, has been substantial. Similarly, techniques for dealing with languages and other sorts of complex information by computer have been considerably extended, and the work has been facilitated by the provision of higher-level programming languages and by the availability of faster machines and increased storage.
Nevertheless, the state of scientific knowledge of language, and of processes for utilizing that knowledge, is still such that it is necessary to take an "engineering approach" to the design of computational linguistics systems. In using the term "engineering", we mean to indicate that compromises have to be made in the design of the system between what is theoretically desirable and what is feasible at the state of the art. Failing a complete grammar of the language over which one wishes to perform SIE, one uses heuristics to determine the features that one wants. At the same time, one uses the scientific knowledge available, insofar as that is feasible. One builds and tests a model or pilot system to explore problems and techniques and tries to extrapolate the experience to production systems, which themselves are likely to have to be "incrementally developed".

In any engineering context, evaluation measures are important. These measures allow one to set criteria for acceptability of designs, which are likely always to be imperfect, and to compare alternative systems. The ultimate evaluation measure on which management decisions rest is usually the cost/benefit ratio. This can be determined only after examining the human alternatives and their effectiveness. It is important to be able to quantify these alternatives, and this is often not done. For instance, it is common to assume that an automatic system should not produce errors, whereas humans always do; so the percentage of errors should be determined experimentally in each case and compared. For the evaluation of SIE systems, we would like to propose three measures:
(1) Robustness: the percentage of inputs handled. Most real SIE systems will reject certain inputs, so the robustness will be one minus the percentage rejected.
(2) Accuracy: the percentage of those inputs handled which are correctly handled.
(3) Error rate: the percentage of erroneous entries within an incorrectly handled input.

Probably the most difficult aspect of SIE engineering is the provision of a safety factor: an ability of the system to recognize inputs that it cannot handle. It is clear that one can create a system that is robust and acceptably accurate which has unacceptable error rates for certain inputs. If the system is to be useful, it must be possible automatically to determine which documents contain unacceptable error rates. It does no good to determine this manually, since that would mean essentially redoing all of the information extraction manually, and the space of documents is not sufficiently uniform or continuous that sampling methods would do any good. It appears, then, that the only way one is going to be able to provide a safety factor is to have a system that understands enough about the linguistic and nonlinguistic aspects of the texts to know when it is not understanding (at least most of the time). We shall have more to say about the safety factor when we discuss our system below.

A common suggestion for "intelligent" systems is that they be given some provision for improving their performance by "learning". Generally the problem with this suggestion is that the complexity of the learning process is greater than that of the original system, and it is also unclear in many cases what the machine needs to learn. It nevertheless seems feasible for SIE systems to learn by interaction with people who are doing information extraction tasks. The simplest case of this would be augmenting the lexicon, but others should be possible.
The first step in this process would be to build in a sufficient safety factor that most incorrectly handled documents can be explicitly rejected. The second would be to localize the factors that caused the rejection sufficiently to be able to ask for help from the person doing the manual extraction process. Although we have considered this aspect of SIE development, we have not made any attempt to implement it.

A particular task that would appear to be a candidate for SIE, under the criteria given above, is the extraction of information on chemical reactions from experimental sections of chemical journals. The journal chosen for our experimental work was the Journal of Organic Chemistry. Two examples of reaction descriptions from this journal are shown in Figure 1. Both of these examples have a particular type of discourse structure, which we have called the "simple model". The paragraphs in the figure (but not in the actual texts) are divided into four components: a heading, a synthesis, a work-up, and a characterization. Usually, the heading names the substance that is produced in the reaction, the synthesis portion describes the steps followed in conducting the reaction, the work-up portion describes the recovery of the substance from the reaction mixture, and the characterization portion presents analytical data supporting the structure assignment. Most of the information that we wish to obtain is in the synthesis portion, which describes the chemical reactants, reaction conditions and apparatus. Figure 2 shows the Reaction Information Form (RIF) designed to hold the required reaction information, with information supplied for the two paragraphs illustrated in Figure 1. One point to notice is that not every piece of data is contained in every reaction description. In both examples, some entries are blank, corresponding to information left unspecified in the corresponding reaction descriptions (those shown in Figure 1).

The chemical reaction SIE is written in PL/I and runs on a 370/168 under TSO. The testing of certain of the algorithms and heuristics has been done using SNOBOL4 (SPITBOL) running under UNIX on a PDP 11/70. The choice of PL/I on the 370 was dictated by practical considerations involving the availability of textual material, the unusual format of that material, and the availability of existing PL/I routines to deal with that format. The programs comprising each stage of the system are implemented modularly. Thus the lexical stage involves separate passes for individual lexical categories. In some cases, these are not order-independent. In the syntactic phase, the individual modules are "word experts", and in the last (extraction) phase, they are individual "frames" or components of frames.
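As a minimal sketch of the kind of record the RIF described above might correspond to in code: the field names below are assumptions chosen for illustration (the actual RIF fields are defined in Figure 2, which is not reproduced here), and a description that omits a datum simply leaves the field empty, mirroring the blank RIF entries noted above.

from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class ReactionInformationForm:
    """Illustrative container for one reaction description (field names are assumptions)."""
    product: Optional[str] = None                 # substance named in the heading
    reactants: List[str] = field(default_factory=list)
    solvent: Optional[str] = None
    apparatus: Optional[str] = None
    temperature_range_c: Optional[Tuple[float, float]] = None
    total_time_hours: Optional[float] = None
    time_is_approximate: bool = False
    yield_percent: Optional[float] = None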
null
null
In the lexical stage, both dictionary lookup and morphological analysis are used to classify words. Morphological analysis procedures include suffix normalization, stemming and root word lookup, and analysis of internal punctuation. Chemical substances may be identified by complex words and phrases, and are therefore surprisingly difficult to isolate. Both lexical and syntactic means are used to isolate and tag chemical names. In the lexical stage, identifiable chemical roots, such as "benz", and terms, such as "iso-", are tagged. In the syntactic stage, a procedure uses clues such as parenthetical expressions, internal commas and the occurrence of juxtaposed chemical roots to identify chemical names. This is really morphology, of course. It also uses the overall syntax of the sentence to check whether a substance name is expected and to delimit the chemical name. Chemical substances which comprise the reactants and the products of a chemical reaction, as well as the reaction conditions and yield, are identified by a hierarchical application of procedures.

The syntactic stage of the system has been implemented by application of word expert procedures to the data structures built during the lexical stage. The word experts are based upon the ideas of Rieger and Small (1979), but it has not been found necessary to use the full complexity of their model, so this system's word experts have come to resemble a standard procedural implementation (Winograd, 1971), based mostly on particular words or word categories. Their function is to determine the role of a word, taking lexical and syntactic context into consideration. The word expert approach was initially chosen because it enables the implementation of fragments of a grammar and does not require the development of a comprehensive grammar. Since irrelevant portions can be identified by reliable heuristics and eliminated, this attribute is particularly useful in the SIE context. The procedures also allow the incorporation of heuristics for isolating certain items of interest. In this context, it might be maintained that the interface between the syntax and the semantic mapping is even less clean than in certain other systems. This is intentional. Because of the specialized nature of the process, we have implemented the "semantic counterpart of syntax" concept, as advocated by Thompson (1966), where we judged that it would not impair the generality of the system within the area of reaction descriptions. We have tried not to make decisions that would make it difficult to extend the system to descriptions of reactions that do not obey the "simple model". The advantages of this approach were discussed in Section I. The system pays particular attention to verb arguments, which are generally marked by prepositions. This "case" type analysis gives pretty good direct clues to the function of items within the meaning representation.
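A minimal sketch of the root-and-affix tagging plus the merging of juxtaposed chemical tokens described at the start of this stage follows. The root list, suffix endings, punctuation heuristic, and function names are assumptions chosen for illustration; they are not the system's actual tables or procedures.

import re

# Illustrative (not the paper's actual) list of chemical roots.
CHEMICAL_ROOTS = ("benz", "meth", "eth", "prop", "phenyl", "chlor", "amin")

def tag_tokens(tokens):
    """Lexical pass: mark tokens containing a known chemical root or affix."""
    tags = []
    for tok in tokens:
        t = tok.lower()
        is_chem = (any(r in t for r in CHEMICAL_ROOTS)
                   or t.endswith(("yl", "ane", "ene", "ol", "ide"))
                   or t.startswith("iso"))
        tags.append("CHEM" if is_chem else "WORD")
    return list(zip(tokens, tags))

def merge_chemical_names(tagged):
    """Syntactic-style pass: juxtaposed CHEM tokens, plus internal punctuation
    and locant digits inside a name, are merged into a single CHEM_NAME unit."""
    merged, buffer = [], []
    for tok, tag in tagged:
        if tag == "CHEM" or (buffer and re.fullmatch(r"[(),\-\d]+", tok)):
            buffer.append(tok)
        else:
            if buffer:
                merged.append((" ".join(buffer), "CHEM_NAME"))
                buffer = []
            merged.append((tok, tag))
    if buffer:
        merged.append((" ".join(buffer), "CHEM_NAME"))
    return merged

print(merge_chemical_names(tag_tokens("methyl 2 - chlorobenzoate was added slowly".split())))
# [('methyl 2 - chlorobenzoate', 'CHEM_NAME'), ('was', 'WORD'), ('added', 'WORD'), ('slowly', 'WORD')]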
Sentence structure is relatively regular, though extraposed phrases and a few types of clauses must be dealt with. Fortunately, the results, in terms of the function of chemicals and reaction conditions, are the same whether the verb form is in an embedded clause or the main verb of the sentence. In other words, we do not have to deal with the nuances implied by higher predicates, or with implicative verbs, presuppositions, and the like.

The semantic mapping could be directly to the components of the reaction information form, and that is the approach that was implemented in the first programs. This gave reasonable results in some test cases, but appeared to be less extensible to other models of reaction description than was desirable. A SNOBOL4 version maps the syntax to a predicate-argument formalism, with a case frame for each verb designating the possible arguments for each predicate.

The Extraction Stage
The meaning representation gives a pretty clear indication of the function of items within the RIF in the simple model. Since we wanted to experiment with generality in this system, we wished to separate general knowledge from linguistic knowledge, and for that reason the actual extraction of items is done using the frame technique (Minsky, 1975; Charniak, 1975). In the literature, frames and similar devices vary both in their format and in their function. In some cases, the information that they encode is still linguistic, at least in part. We are using them in the "nonlinguistic" sense, as discussed in Section I. In our system, frames encode the expectations that a trained reader would bring to the task of extracting information from synthetic descriptions, involving the usual structure of these descriptions. A frame is being developed initially for the simple model. This frame looks for the synthesis section, discarding the work-up and characterization portions, and then analyzes the synthesis, where subframes correspond to the particular entries needed in the RIF. As one example, the "time" frame expects to find a series of reaction step times in the description. These are already labelled "time", and the frame knows that it has to total them, making approximations of such time expressions as "overnight" and indicating that the total is then approximate. Another example is the "temperature" frame, which expects a series of temperatures, and must calculate the minimum and maximum in order to specify a range. Again, a certain amount of specialized knowledge, such as the temperature indicated by an ice water bath, is necessary.

As of the date of this paper, we have only experimented with the version of the system that maps directly from the syntax into components of the reaction coding form. As noted above, this version does not have the generality that we desire, but it gives a pretty good indication of the capabilities of the system as now implemented. As a test of the system, we ran it on fifty synthetic paragraphs from the experimental sections of the Journal of Organic Chemistry, and thirty-six were processed satisfactorily. Four had clear, detectable problems, so the robustness was 92% (one minus 4/50), but the accuracy was only 78% (36 of the 46 paragraphs handled), since ten of the paragraphs did not follow the simple model and were nevertheless processed. Since these were full of errors, we did not try to compute a figure for average error rate. Although the objective of building this experimental system was only to deal with the simple model, the exercise has made clear to us the importance of the safety factor in making a system such as this useful.
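A minimal sketch of the "time" and "temperature" subframes described above, operating on labelled items from the meaning representation: the numeric value assumed for "overnight", the dictionary and function names, and the input format are illustrative assumptions, not the actual frame implementation.

APPROXIMATE_TIMES_HOURS = {"overnight": 16.0}   # assumed value, for illustration only
SPECIAL_TEMPERATURES_C = {"ice water bath": 0.0}

def time_frame(labelled_items):
    """Total all items labelled 'time'; flag the total as approximate if any
    step used a vague expression such as 'overnight'."""
    total, approximate = 0.0, False
    for label, value in labelled_items:
        if label != "time":
            continue
        if isinstance(value, str):
            total += APPROXIMATE_TIMES_HOURS.get(value, 0.0)
            approximate = True
        else:
            total += float(value)
    return total, approximate

def temperature_frame(labelled_items):
    """Collect all items labelled 'temperature' and report the (min, max) range."""
    temps = []
    for label, value in labelled_items:
        if label != "temperature":
            continue
        if isinstance(value, str):
            value = SPECIAL_TEMPERATURES_C.get(value)
            if value is None:
                continue
        temps.append(float(value))
    return (min(temps), max(temps)) if temps else None

# Two timed steps (one of them 'overnight'), at ice-bath and room temperature:
items = [("time", 2), ("temperature", "ice water bath"),
         ("time", "overnight"), ("temperature", 25)]
print(time_frame(items))         # (18.0, True)
print(temperature_frame(items))  # (0.0, 25.0)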
We intend to continue work with the present system only for a few weeks, meanwhile considering the problems and promises of extending it.

The promise of the SIE area has been recognized by other researchers. Systems that fall within this paradigm include one constructed by the Operating Systems Division of Logicon (Silva, Montgomery and Dwiggins, 1979), which aims to "model the cognitive activities of the human analyst as he reads and understands message text, distilling its contents into information items of interest to him, and building a conceptual model of the information conveyed by the message," in the area of missile and satellite reports and aircraft activities. Another project, at Rutgers University, involves the analysis of case descriptions concerning glaucoma patients (Ciesielski, 1979), and the most extensive SIE project, also in the medical area, is that of the group headed by Naomi Sager (1981) at New York University, and described in her book.

The problem that we have had concerning the safety factor is one that is likely to be found in any SIE system, but it is soluble, we feel. Even though we have not completed work on this experimental system as of the time of writing this paper (we have found more syntactic and semantic procedures to be implemented), we already have ideas as to how to build in a better safety factor. Generally, these can be characterized as using some of the information that can be gleaned by a combination of linguistic and chemical knowledge which we had previously ignored as redundant. While it is redundant in "successful" cases, it produces conflicts in other cases, indicating that something is wrong and that the document should be processed by hand. If the safety factor can be improved, SIE systems offer a promising area of application of computational linguistics techniques. Clearly, nothing less than computational linguistics techniques shows any hope of providing a reasonable safety factor, or even adequate robustness and accuracy.
null
null
null
null
null
{ "paperhash": [ "ciesielski|natural_language_input_to_a_computer-based_glaucoma_consultation_system", "silva|an_application_of_automated_language_understanding_techniques_to_the_generation_of_data_base_elements", "small|word_expert_parsing", "charniak|organization_and_inference_in_a_frame-like_system_of_common_sense_knowledge", "thompson|english_for_the_computer" ], "title": [ "Natural Language Input to a Computer-Based Glaucoma Consultation System", "An Application of Automated Language Understanding Techniques to the Generation of Data Base Elements", "Word Expert Parsing", "Organization and Inference in a Frame-Like System of Common Sense Knowledge", "English for the computer" ], "abstract": [ "A \"Front End\" for a Computer-Based Glaucoma Consultation System is described. The system views a case as a description of a particular instance of a class of concepts called \"structured objects\" and builds up a representation of the instance from the sentences in the case. the information required by the consultation system is then extracted and passed on to the consultation system in the appropriately coded form. A core of syntactic, semantic and contextual rules which are applicable to all structured objects is being developed together with a representation of the structured object GLAUCOMA-PATIENT. There is also a facility for adding domain dependent syntax, abbreviations and defaults.", "This paper defines a methodology for automatically analyzing textual reports of events and synthesizing event data elements from the reports for automated input to a data base. The long-term goal of the work described is to develop a support technology for specific analytical functions related to the evaluation of daily message traffic in a military environment. The approach taken leans heavily on theoretical advances in several disciplines, including linguistics, computational linguistics, artificial intelligence, and cognitive psychology. The aim is to model the cognitive activities of the human analyst as he reads and understands message text, distilling its contents into information items of interest to him, and building a conceptual model of the information conveyed by the message. This methodology, although developed on the basis of a restricted subject domain, is presumed to be general, and extensible to other domains.Our approach is centered around the notion of \"event\", and utilizes two major knowledge sources: (1) a model of the sublanguage for event reporting which characterizes the message traffic, and (2), a model of the analyst-user's conceptualization of the world (i.e., a model of the entities and relations characteristic of his world).", "This paper describes an approach to conceptual analysis and understanding of natural language in which linguistic knowledge centers on individual words, and the analysis mechanisms consist of interactions among distributed procedural experts representing that knowledge. Each word expert models the process of diagnosing the intended usage of a particular word in context. The Word Expert Parser performs conceptual analysis through the interactions of the individual experts, which ask questions and exchange information in converging on a single mutually acceptable sentence meaning. The Word Expert theory is advanced as a better cognitive model of natural language understanding than the traditional rule-based approaches. 
The Word Expert Parser models parts of the theory, and the important issues of control and representation that arise in developing such a model from the basis of the technical discussion. An example from the prototype LISP implementation helps explain the theoretical results presented.", "My goals have not changed since (Charniak 72). I am still interested in the construction of a computer program which will answer questions about simple narration (e.g. children's stories). More exactly, if one makes the somewhat unrealistic division of the problem into (a) going from natural language to a convenient internal representation, and (b) being able to \"reason\" about the information in the story in order to answer questions, my interests are clearly in the latter section. I will take it as given that such reasoning requires large amounts of \"common sense knowledge\" about the topics mentioned in the text, so I will not demonstrate this point. (However it should come out incidentally from the examples used to demonstrate other points.) To reason with this knowledge requires that it be organized, by which I simply mean it must be structured so that the system can get at necessary knowledge when it is needed, but that unnecessary knowledge will not clog the system with the all too familiar \"combinatorial explosion\". I will start with my current thoughts on organization.", "What about English as a programming language? Few would question that this is a desirable goal. On the other hand, I dare say every one of us has rather deep reservations both about its feasibility and about a number of problems that it entails. This paper presents a point of view which gives some clarity to the relationship between English and programming languages. This point of view has found substance in an experimental system called DEACON. The second paper in this session will describe the specific DEACON system and its capabilities." ], "authors": [ { "name": [ "V. Ciesielski" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "G. Silva", "C. Montgomery", "D. Dwiggins" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "S. Small" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Eugene Charniak" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "F. B. Thompson" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null, null, null ], "s2_corpus_id": [ "220401", "10736438", "5317323", "10595802", "16173809" ], "intents": [ [], [], [], [ "methodology" ], [ "methodology" ] ], "isInfluential": [ false, false, false, false, false ] }
Problem: The economic problem of entering original information into databases, particularly information available in restricted types of natural language text, such as chemical literature experimental sections. Solution: Specialized Information Extraction (SIE) systems are proposed to automatically extract information about chemical reactions from natural language text and create a data structure containing the relevant information.
504
0.013889
null
null
null
null
null
null
null
null
6c2ceca9c00f75cc75bfcd9d3010ae2f71efc6a4
326097
null
Interactive Natural Language Problem Solving: A Pragmatic Approach
A class of natural language processors is described which allow a user to display objects of interest on a computer terminal and manipulate them via typed or spoken English sentences.
{ "name": [ "Biermann, A. and", "Rodman, R. and", "Ballard, B. and", "Betancourt, T. and", "Bilbro, G. and", "Deas, H. and", "Fineman, L. and", "Fink, P. and", "Gilbert, K. and", "Gregory, D. and", "Heidlage, F." ], "affiliation": [ null, null, null, null, null, null, null, null, null, null, null ] }
null
null
First Conference on Applied Natural Language Processing
1983-02-01
33
24
null
null
null
This paper concerns itself with the implementation of the voice input facility using an automatic speech recognizer, and the touch input facility using a touch sensitive screen. To overcome the high error rates of the speech recognizer under conditions of actual problem solving in natural language, error correction software has been designed and is described here. Also described are problems involving the resolution of voice input with touch input, and the identification of the intended referents of touch input. To measure system performance we have considered two classes of factors: the various conditions of testing, and the level and quality of training of the system user. In the paper a sequence of five different testing situations is observed, each one resulting in a lowering of system performance by several percentage points below the previous one. A training procedure for potential users is described, and an experiment is discussed which utilizes the training procedure to enable users to solve actual non-trivial problems using natural language voice communication.

The NLC natural language system handles a wide range of English constructions, including arbitrarily deep nesting of noun groups, extensive conjunction processing, user defined imperative verbs, and looping and branching features. More recently, a domain independent abstraction of the NLC system has been constructed and is now being specialized to handle a text processing task. In this system, text can be displayed and modified or formatted with natural language commands. Current work emphasizes the addition of voice input, voice output, and a touch sensitive display screen. Speech recognition is being done on an experimental basis with the Nippon Electric DP-200 Connected Speech Recognizer in both discrete and connected speech modes, and with the Votan Corporation V-5000 Development System. The touch sensitive screen being used is a Carroll touch panel mounted on a 19-inch color monitor. Voice response is also provided by the Votan V-5000, which assembles and vocalizes digitally recorded human voice messages. The work has progressed to the point where our natural language matrix computer NLC is operative under voice control using the DP-200, and the text processing system is beginning to function using the V-5000 speech recognizer. The touch panel interface and voice response systems are still in the design phase. The goal of the project is to make possible voice and touch interactions of the following kind: "Retrieve file Budget83." Prompts and error messages will be given by voice response. System design is aimed at allowing fast interactive control of the objects on the screen while the user maintains uninterrupted eye contact with the events as they happen.

A continuous program of human factors testing has been maintained by the project in order to build a realistic view of potential users and to measure progress in achieving usability. For example, in a test of the matrix computation system with typed input, twenty-three subjects solved problems similar to those that might be assigned in a first course in programming (Biermann, Ballard, and Sigmon [7]).
In this test, the NLC system correctly processed 81 percent of the sentences, and users were quite satisfied with its general performance. Other tests of the system are described in Fink [14] and Geist et al. [15]. In another test (Fineman [13]), a simulator for a voice driven office automation system was used to obtain data on user behaviors when problem solving is with discrete and slow connected speech. It was found that users quickly adapted their speech to the required discipline of slow, methodical, and simple sentences which can be recognized by machine. Since the data obtained in any system test is heavily dependent on the amount and kind of training given to subjects, it is necessary to have a standardized training procedure. In the current work, a voice tutorial has been developed for training users to use a voice interactive system (Deas [11]). This paper reports on the current status of these projects, with emphasis on system design, speech input facilities and their performance, the touch input system, and human factors considerations.

Many of the recognizer's substitution errors are predictable, e.g. "by" for "five", "and" for "add", or "up" for "of". If a given word slot does not contain the correct token, the substituted word can be added to the appropriate synophone set for that subject. Thereafter, if the same substitution error recurs during a session with that subject, the correct word will be included in the synophone list for that word slot. The occurrence of one or more rejections in a sentence almost always results in a request for repetition. However, we are designing a number of facilities to handle rejections. In some cases, the rejected word can be determined from context, and processing can continue uninterrupted. Otherwise, the current plan is to handle a single rejection by returning an audio response that repeats all of the sentence with the word "what" in place of the rejected element. The speaker will then be able to choose to repeat the rejected word or, in case other errors are apparent, to repeat the entire utterance. In cases of multiple rejection errors, the speaker is requested to repeat the entire utterance. In all cases previous utterances will not be discarded.

A user might, for example, say "Move that there and cover it." with a point to the object to be moved and covered. A pointing ability would fit in very nicely with voice driven NLC, and our project includes a touch sensitive screen so that the user can say "double this", point to a row, and cause the processor to double every element in that row. More complex sentences, such as "Add this row to that row putting the results here." accompanied by three touches, also become possible. The strategy here is to pair touches and utterances in the order given by the user. In the last example all touches functioned to establish focus or resolve noun group reference. If the emphasis function of touch is mixed in, a more difficult situation arises. If three touches accompany "Add this entry to the first row and put the result here." then the second touch was presumably to emphasize the first row, or even to establish a rhythm of touching. In any case the facility to match touches with nondeictic expressions is needed. If only two touches accompany this last sentence, then the focusing function should take precedence, and the touches should be matched with "this entry" and "here." The situation is made even more complex by the ability to establish focus verbally. In NLC the user can say "Consider row four. Double that row." and the expression "that row" will refer to row four.
If the same utterance is accompanied by a touch to a row other than four, a potential conflict results. Our strategy is to give precedence to touch, since it is the more immediate focusing mechanism. Thus the sequence "Consider row four. Double that row." (touching row three) will result in the doubling of row three.

The testing situations include:
(1) Lists of words are read in tests performed by the manufacturer.
(2) Lists of words are read in our laboratory.
(3) Sentences are read in our laboratory (discrete or connected).
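The pairing strategy described above (touches matched to deictic expressions in utterance order, with touch taking precedence over a verbally established focus) can be sketched as follows. The deictic word list, function name, and focus handling are assumptions for illustration, not the project's actual implementation.

DEICTIC_WORDS = {"this", "that", "these", "those", "here", "there"}

def resolve_references(utterance_tokens, touches, verbal_focus=None):
    """Pair deictic expressions with touches in the order both were given.
    Touch takes precedence over a verbally established focus; remaining
    deictics fall back to the verbal focus if one exists."""
    bindings = []
    touch_iter = iter(touches)
    for tok in utterance_tokens:
        if tok.lower() in DEICTIC_WORDS:
            target = next(touch_iter, None)
            if target is None:
                target = verbal_focus      # e.g. set by "Consider row four."
            bindings.append((tok, target))
    return bindings

# "Double that row." with a touch on row 3 while row 4 is the verbal focus:
print(resolve_references(["Double", "that", "row"], ["row 3"], verbal_focus="row 4"))
# [('that', 'row 3')]  -- the touch wins, so row three is doubled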
null
null
Main paper: fineman t g. bilbro: , H. Deas , L.This paper concerns itself with the implementation of the voice input facility using an automatic speech recognizer, and the touch input facility using a touch sensitive screen.To overcome the high error rates of the speech recognizer under conditions of actual problem solving in natural language, error correction software has been designed and is described here. Also described are problems involving the resolution of voice input with touch input, and the identification of the intended referents of touch input.To measur~ system performance we have considered two classes of factors: the various conditions of testing, and the level and quality of training of the system user.In the paper a sequence of five different testing situations is observed, each one resulting in a lowering of system performance by several percentage points below the previous one.A training procedure for potential users is described. and an experiment is discussed which utilizes the training procedure to enable users to solve actual non-trivial problems using natural language voice communication. including arbitrarily deep nesting of noun groups, extensive conjunction processing, user defined imperative verbs, and looping and branching features.More recently, a domain independent abstraction of the NLC system has been constructed and now is being specialized to handle a text processlng task.In this system, text can be displayed and modified or formatted with natural language commands.Current work emphasizes the addition of voice input, voice output, and a touch sensitive display screen.Speech recognition is being done on an experimental basis with the Nippon Electric DP-200 Connected Speech Recognizer in both discrete and connected speech modes, and with the Votan Corporation V-SO00 Development System.The touch sensitive screen being used is a Carroll touch panel mounted on a 19-inch color monitor.Voice response is also provided by the Votan V-5000 which assembles and vocalizes digitally recorded human voice messages. The work has progressed to the point where OUr natural language matrix computer NLC is operative under voice control using the DP-200 and the text processing system is beginning to function using the V-5000 speech recognizer.The touch panel interface and voice response systems are still in the design phase.The goal of the project is to make possible voice and touch interactions of the following kind:Retrieve file Budget83. Prompts and error messages will be given by voice response, gystem design is aimed at allowing fast interactive control of the objects on the screen while the user maintains uninterrupted eye contact with th~ events as they happen.A continuous program of human factors testing has been maintained by the project in order to build a realistic view of potential users and to measure Progress in achieving usability.For example, in a test of the matrix computation system with typed input, twenty-three subjects solved problems similar to those that might be assigned in a first course in programming (Biermann, Ballard, and Sigmon [7] ). 
In this test, the NLC system correctly processed 81 percent of the sentences and users were quite satisfied with its general performance.Other tests of the system are described in Fink [14] and Geist et el.[IS].In another test (Fineman [13]), a simulator for a voice driven office automation system was used to obtain data on user behaviors when problem solving is with discrete and slow connetted speech.It was found that users quickly adapted their speech to the required discipline of slow, methodical, and simple sentences which can be recognized by machine.Since the data obtained in any system test is heavily dependent on the amount and kind of training given to subjects, it is necessary to have a standardlzed training procedure. In the current work, a voice tutorial has been developed for training users to use a voice interactive system (Deas [Ii] ).reports on the current status of these projects with emphasis on system design, speech input facilities and their performance, the touch input system and human factors considerations. predictable, e.g. "by" for "five", "and" for "add", or "up" for "of".We If a given word slot does not contain the correct token, the substituted word can be added to the appropriate synophone set for that subject. Thereafter, if the same substitution error recurs during a session with that subject, the correct word will be included in the synophone list for that word slot.The occurrence of one or more rejections in a sentence almost always results in a request for repetition.However, we are designing a number of facilities to handle rejections.In some cases, the rejected word can be determined from context, and processing can continue uninterrupted.Otherwise, the current plan is to handle a single rejection by returning an audio response that repeats all of the sentence with the word "what" in place of the rejected element. The speaker will then .be able to choose to repeat the rejected word or, in case other errors are apparent, to repeat the entire utterance.In cases of multiple rejection errors, the speaker is requested to repeat the entire utterance.In all cases previous utterances will not be. discarded. Move that there and cover it.with a point to the object to be moved and covered.A pointing ability would fit in very nicely with voice driven NLC and our pro-Ject includes a touch sensitive screen so that the user can say "double this", point to a row, and cause the processor to double every element in that row. More complex sentences such as Add this row to that row putting the results here. (with three touchee) also become possible. accompanied by three touches.The strategy here is to -air touches and utterances in the order given by the user.In the last example all touches func-tioned to establish focus or resol~=e no,~n group reference.If the emphasis function of touch is mixed in, a more difficult situation arises.If three touches accompany Add this entry to the first row and put the result here. then the second touch was presumably to emphasize the first row or even to establish a rhythm of touching.In any case the facility to match touches with nondeictic expressions iS needed. If only two touches accompany this last sentence then the focusing function should take precedence, and the touches should be matched with "this entry" and "here."The situation is made even more complex by the ability to establish focus verbally.In NLC the user can sayConsider row four.Double that row. and the expression "that row" will refer to row four. 
The situation is made even more complex by the ability to establish focus verbally. In NLC the user can say Consider row four. Double that row. and the expression "that row" will refer to row four. If the same utterance is accompanied by a touch to a row other than four, a potential conflict results. Our strategy is to give precedence to touch, since it is the more immediate focusing mechanism. Thus the sequence Consider row four. Double that row. (touching row three) will result in the doubling of row three.

The testing situations mentioned above include:
(1) Lists of words are read in tests performed by the manufacturer.
(2) Lists of words are read in our laboratory.
(3) Sentences are read in our laboratory (discrete or connected).

Appendix:
null
null
null
null
{ "paperhash": [ "hendrix|transportable_natural-language_interfaces_to_databases", "thompson|shifting_to_a_higher_gear_in_a_natural_language_system", "haas|an_approach_to_acquiring_and_applying_knowledge", "tennant|experience_with_the_evaluation_of_natural_language_question_answerers", "harris|the_robot_system:_natural_language_processing_applied_to_data_base_query", "heidorn|natural_language_dialogue_for_managing_an_on-line_calendar", "waltz|an_english_language_question_answering_system_for_a_large_relational_database", "hendrix|developing_a_natural_language_interface_to_complex_data", "herdrix|human_engineering_fcr_applied_natural_language_processing", "plath|request:_a_natural_language_question-answering_system", "petrick|on_natural_language_based_computer_systems", "mylopoulos|torus:_a_natural_language_understanding_system_for_data_management", "woods|motivation_and_overview_of_speechlis:_an_experimental_prototype_for_speech_understanding_research", "damerau|operating_statistics_for_the_transformational_question_answering_system", "egly|cognitive_style,_categorization,_and_vocational_effectss_on_performance_of_rel_database_users", "ballard|programming_in_natural_language:_“nlc”_as_a_prototype", "hershman|user_performance_with_a_natural_language_query_system_for_command_control." ], "title": [ "Transportable Natural-Language Interfaces to Databases", "Shifting to a higher gear in a natural language system", "An Approach to Acquiring and Applying Knowledge", "Experience with the Evaluation of Natural Language Question Answerers", "The ROBOT System: Natural language processing applied to data base query", "Natural Language Dialogue For Managing An On-Line Calendar", "An English language question answering system for a large relational database", "Developing a natural language interface to complex data", "Human engineering fcr applied natural language processing", "Request: a natural language question-answering system", "On Natural Language Based Computer Systems", "TORUS: a natural language understanding system for data management", "Motivation and overview of SPEECHLIS: An experimental prototype for speech understanding research", "Operating Statistics for the Transformational Question Answering System", "Cognitive style, categorization, and vocational effectss on performance of REL database users", "Programming in natural language: “NLC” as a prototype", "User Performance with a Natural Language Query System for Command Control." ], "abstract": [ "Abstract : Several computer systems have now been constructed that allow users to access databases by posing questions in natural languages, such as English. When used in the restricted domains for which they have been especially designed, these systems have achieved reasonably high levels of performance. However, these systems require the encoding of knowledge about the domain of application in complex data structures that typically can be created for a new database only with considerable effort on the part of a computer professional who has had special training in computational linguistics and the use of databases. This paper describes initial work on a methodology for creating natural-language processing capabilities for new databases without the need for intervention by specially trained experts. The approach is to acquire logical schemata and lexical information through simple interactive dialogues with someone who is familiar with the form and content of the database, but unfamiliar with the technology of natural-language interfaces. 
A prototype system using this methodology is described and an example transcript is presented.", "We have completed the development of the REL System, a system for communicating with the computer in natural language concerning a relational database. We have been using that system in a series of experiments On how people actually do communicate in solving an intellectual task. These experiments, together with our general experience with REL, and related work elsewhere, have led us to the specification arid development of a new system, the POL (Problem Oriented Language) System. POL is an evolutionary extension of REL, preserving what has worked, and extending and adding new capabilities to meet observed needs. These improvements include more responsive diagnostics, handling of sentence fragments, inter knowledge base communications, and new facilities for building and extending the knowledge bases of users. This paper introduces POL.", "The problem addressed in this paper is how to enable a computer system to acquire facts about new domains from tutors who are experts in their respective fields, but who have little or no training in computer science. The information to be acquired is that needed to support question-answering activities. The basic acquisition approach is \"learning by being told.\" We have been especially interested in exploring the notion of simultaneously learning not only new concepts, but also the linguistic constructions used to express those concepts. As a research vehicle we have developed a system that is preprogrammed with deductive algorithms and a fixed set of syntactic/semantic rules covering a small subset of English. It has been endowed with sufficient seed concepts and seed vocabulary to support effective tutorial interaction. Furthermore, the system is capable of learning new concepts and vocabulary, and can apply its acquired knowledge in a prescribed range of problem-solving situations.", "Research in natural language processing could be facilitated by thorough and critical evaluations of natural language systems. Two measurements, conceptual and linguistic completeness, are defined and discussed in this paper. Testing done on two natural language question answerers demonstrated that the conceptual coverage of such systems should be extended to better satisfy the needs and expectations of users.", "In the early 1970's the natural language processing techniques developed within the field of artificial intelligence (AI) made important progress. Within certain restricted micro worlds of discourse it became possible to process a reasonably large class of English. These techniques have now been applied to the real micro world of data base query, allowing for information to be extracted from data bases by asking ordinary English questions. This paper discusses the importance of true natural language data base query and describes the ROBOT system, a high performance production level system already installed in several real world environments. The specific data structure requirements of the ROBOT system are discussed, as well as an extended type of data inversion that provides precisely the functionality required by the natural language parser.", "This paper describes a project for studying the feasibility of developing systems which accomplish typical office tasks by means of human-like communication with the user. 
An actual dialogue with the initial version of a system that is being built for scheduling activities such as meetings is presented and then is used as a source of examples in explaining the operation of the system. The knowledge network is described, and the use of Augmented Phrase Structure Grammars for both the analysis and generation of English utterances in this system is discussed.", "By typing requests in English, casual users will be able to obtain explicit answers from a large relational database of aircraft flight and maintenance data using a system called PLANES. The design and implementation of this system is described and illustrated with detailed examples of the operation of system components and examples of overall system operation. The language processing portion of the system uses a number of augmented transition networks, each of which matches phrases with a specific meaning, along with context registers (history keepers) and concept case frames; these are used for judging meaningfulness of questions, generating dialogue for clarifying partially understood questions, and resolving ellipsis and pronoun reference problems. Other system components construct a formal query for the relational database, and optimize the order of searching relations. Methods are discussed for handling vague or complex questions and for providing browsing ability. Also included are discussions of important issues in programming natural language systems for limited domains, and the relationship of this system to others.", "Aspects of an intelligent interface that provides natural language access to a large body of data distributed over a computer network are described. The overall system architecture is presented, showing how a user is buffered from the actual database management systems (DBMSs) by three layers of insulating components. These layers operate in series to convert natural language queries into calls to DBMSs at remote sites. Attention is then focused on the first of the insulating components, the natural language system. A pragmatic approach to language access that has proved useful for building interfaces to databases is described and illustrated by examples. Special language features that increase system usability, such as spelling correction, processing of incomplete inputs, and run-time system personalization, are also discussed. The language system is contrasted with other work in applied natural language processing, and the system's limitations are analyzed.", "Human engineering features for enhancing the usability of practical natural language systems are described. Such features include spelling correction, processing of incomplete (elliptical) input?, of the underlying language definition through English queries, and their ability for casual users to extend the language accepted by the system through the use of synonyms and peraphrases. All of the features described are incorporated in LJFER, -\"applications-oriented system for creating natural language interfaces between computer programs and casual USERS LJFER's methods for the mroe complex human engineering features presented.", "REQUEST [1,2,3] is an experimental Restricted English Question-answering system that has been implemented in LISP 1.5 and runs under an interactive operating system in one million bytes of virtual storage. 
It is currently capable of analyzing and answering a variety of English questions, spanning a significant range of syntactic complexity, with respect to a small Fortune-500-type data base.", "Some of the arguments that have been given both for and against the use of natural languages in question-answering and programming systems are discussed. Several natural language based computer systems are considered in assessing the current level of system development. Finally, certain pervasive difficulties that have arisen in developing natural language based systems are identified, and the approach taken to overcome them in the REQUEST (Restricted English QUESTion-Answering) System is described.", "This paper describes TORUS, a natural language understanding system which serves as a front end to a data base management system in order to facilitate communication with a casual user. The system uses a semantic network for \"understanding\" each input statement and for deciding what information to output in response. The semantic network stores general knowledge about the problem domain, in this case \"student files\" and the educational process at the University of Toronto, along with specific information obtained during the dialogue with the user. A number of associated functions make it possible to integrate the meaning of an input statement to the semantic network, and to select a portion of the semantic network which stores information that must be output. A first version of TORUS has been implemented and is currently being tested.", "SPEECHLIS is a research prototype of an intelligent speech understanding system which makes use of advanced techniques of artificial intelligence, natural language processing, and acoustical and phonological analysis in an integrated way to determine the interpretation of continuous speech utterances. This paper describes a number of the characteristics of the speech understanding task which influence the ways in which syntactic, semantic, pragmatic and lexical knowledge interact with acoustical and phonological information in the process of understanding speech utterances. The focus is on what the different knowledge sources have to contribute at different points in the analysis and the organization of a computer system to combine these different sources of information into an integrated system.", "This paper presents a statistical summary of the use of the Transformational Question Answering (TQA) system by the City of White Plains Planning Department during the year 1978. A complete record of the 788 questions submitted to the system that year is included, as are separate listings of some of the problem inputs. Tables summarizing the performance of the system are also included and discussed. In general, performance of the system was sufficiently good that we believe that the approach being followed is a viable one, and are continuing to develop and extend the system.", "Twelve subjects from two job categories, sales engineers and programmer analysts, used an REL ENGLISH database to answer a set of questions. These questions were designed to require successively more complex interactions. The database contained Hewlett-Packard's Condensed Order Records, which were pertinent to the jobs of the sales engineers.All of the subjects were given a battery of cognitive tests measuring cognitive style and pattern extrapolation skills prior to using the database. 
They also received a brief training session on the structure of the database.Analysis of the subjects interactions with the REL ENGLISH database, particularly analysis of the errors made, showed: first, that cognitive style is significantly correlated with the number of questions successfully completed; second, that while sales engineers were able to access all levels of the hierarchy in the database, programmer analysts had significantly more difficulty accessing data from higher levels than they did with data from the same or lower levels than the standard, entry level; and third, that programmer analysts had less difficulty with the fixed-format, programming-language-like features of REL ENGLISH, while sales engineers has less difficulty with the free-format, English-like features of REL ENGLISH.These findings suggest that quasi-natural language database interfaces are appropriate for nonprogrammers who have a field-independent cognitive style and who already are domain experts in the area covered by the database.", "The state of the art in computational linguistics has progressed to the point where it is now possible to process simple programs written in natural language. This report describes a natural language programming system called NLC which enables a computer user to type English commands into a display terminal and watch them executed on example data shown on the screen. The system is designed to process data stored in matrices or tables, and any problem which can be represented in such structures can be handled if the total storage requirements are not excessive.", "Abstract : Natural language query systems have been developed as potential aids to command control data retrieval processes involving large data bases. One such system, LADDER (for Language Access to Distributed Data with Error Recovery), was studied in order to identify significant performance characteristics associated with its use in a Navy command control environment. Ten officers received moderate training in LADDER and subsequently employed it in a search and rescue scenario. Both system and user performance were examined. Basic patterns of usage were established, and troublesome syntactic expressions were identified. Design recommendations for the man-computer interface in command control query systems are discussed. (Author)" ], "authors": [ { "name": [ "G. Hendrix", "W. H. Lewis" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "B. H. Thompson", "F. B. Thompson" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Norman Haas", "G. Hendrix" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "H. Tennant" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "L. R. Harris" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "George E. Heidorn" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "D. Waltz" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "G. Hendrix", "E. 
Sacerdoti", "Daniel Sagalowicz", "Jonathan Slocum" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Gary G. Herdrix" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Warren J. Plath" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Stanley R. Petrick" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "J. Mylopoulos", "A. Borgida", "Philip R. Cohen", "Nick Roussopoulos", "John Tsotaos", "Harry K. T. Wong" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "W. Woods" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "F. J. Damerau" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "D. Egly", "K. Wescourt" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "B. Ballard", "A. Biermann" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "R. L. Hershman", "R. T. Kelly", "Harold G Miller" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ], "s2_corpus_id": [ "9344388", "18071374", "7704586", "31711072", "1732335", "17770409", "18227465", "15391397", "59814145", "8206864", "13293588", "10642322", "62770701", "22605062", "17137667", "2527577", "60738014" ], "intents": [ [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [] ], "isInfluential": [ false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false ] }
- Problem: The paper addresses the implementation of natural language processors for voice and touch input, focusing on overcoming high error rates of speech recognition and resolving issues related to voice and touch input integration. - Solution: The paper proposes the design and description of error correction software for speech recognition, as well as addressing problems related to the resolution of voice and touch input, including the identification of intended referents and the implementation of various features such as conjunction processing and user-defined imperative verbs.
504
0.047619
null
null
null
null
null
null
null
null
3f5132e66afdd8a366fcf55e5e9b721a34168e9b
812442
null
Parsing With Logical Variables
Logic based programming systems have enjoyed an increasing popularity in applied AI work in the last few years. One of the contributions to Computational Linguistics made by the Logic Programming Paradigm has been the Definite Clause Grammar. In comparing DCG's with previous parsing mechanisms such as ATN's, certain clear advantages are seen. We feel that the most important of these advantages are due to the use of Logical Variables with Unification as the fundamental operation on them. To illustrate the power of the Logical Variable, we have implemented an experimental ATN system which treats ATN registers as Logical Variables and provides a unification operation over them. We would like to simultaneously encourage the use of the powerful mechanisms available in DCG's, and demonstrate that some of these techniques can be captured without reference to a resolution theorem prover.
{ "name": [ "Finin, Timothy W. and", "Palmer, Martha Stone" ], "affiliation": [ null, null ] }
null
null
First Conference on Applied Natural Language Processing
1983-02-01
18
5
null
Logic based programming systems have enjoyed an increasing popularity in applied AI work in the last few years. One of the contributions to Computational Linguistics made by the Logic Programming Paradigm has been the Definite Clause Grammar. An excellent introduction to this formalism can be found in [Pereira], in which the authors present the formalism and make a detailed comparison to Augmented Transition Networks as a means of both specifying a language and parsing sentences in a language. We feel that the major strengths offered by the DCG formalism arise from its use of logical variables with unification as the fundamental operation on them. These techniques can be abstracted from the theorem proving paradigm and adapted to other parsing systems (see [Kay] and [Bossie]). We have implemented an experimental ATN system which treats ATN registers as logical variables and provides a unification operation over them.

The DCG formalism provides a powerful mechanism for parsing based on a context free grammar. The grammar rule S -> NP VP can be seen as the universally quantified logical statement:

For all x, y, and z: NP(x) /\ VP(y) /\ Concatenate(x,y,z) -> S(z).

where "x" and "y" represent sequences of words which can be concatenated together to produce a sentence, "S". Prolog, a programming language based on predicate calculus, allows logical statements to be input as Horn clauses in the following (reversed) form:

s(Z) <- np(X), vp(Y), Concatenate(X,Y,Z).

The resolution theorem prover that "interprets" the Prolog clauses would take the negation of S as the goal and try to produce the null clause. Thus the preceding clause can be interpreted procedurally as: "To establish goal S, try to establish subgoals NP, VP and Concatenate." DCG's provide syntactic sugar on top of Prolog so that the arrow can be reversed and the "Concatenate" predicate can be dispensed with. The words in the input string are looked at sequentially each time a "[Word]" predicate is executed, which implicitly tests for concatenation (see Figure 1). DCG's allow grammar rules to be expressed very cleanly, while still allowing ATN-type augmentation through the addition of arbitrary tests on the contents of the variables.

Pereira and Warren argue that the DCG formalism is well suited for specifying a formal description of a language and also for use with a parser. In particular, they assert that it is a significant advance over an ATN approach on both philosophical and practical grounds. Their chief claims are that:

1. DCGs provide a common formalism for theoretical work in Computational Linguistics and for writing efficient natural language processors.

2. The rule based nature of a DCG results in systems of greater clarity and modularity.
null
3. DCG's provide greater freedom in the range of structures that can be built in the course of analyzing a constituent. In particular, the DCG formalism makes it easy to create structures that do not follow the structure implied by the rules of a constituent, and easy to create a structure for a constituent that depends on items not yet encountered in the sentence.

The first two points have been discussed in the past whenever the ATN formalism is compared with a rule-based grammar (see [Pratt], [Heidorn], [Codd], or [Bates]). The outcome of such discussions varies. It is safe to say that how one feels about these points depends quite heavily on past experience in using the two formalisms. We find the third point to be well founded, however. It is clear that the DCG differs most from previous rule-based parsing systems in its inclusion of logical variables. We have built an experimental ATN system which can treat ATN registers as logical variables and, we feel, capture these important strengths offered by the DCG formalism in the otherwise standard ATN formalism.

The second section gives a more detailed description of DCG's and presents a simple grammar. In the third section we show an ATN grammar which is "equivalent" to the DCG grammar and discuss the source of its awkwardness. The fourth section then presents an ATN formalism extended to include viewing ATN registers as logical variables which are subject to the standard unification operation. The final section concludes this note and suggests that logical variables might be fruitfully introduced into other parsing algorithms and systems.

The simple DCG grammar (Figure 1) is as follows:

s(P) -> np(X, P1, P), vp(X, P1).
np(X, P1, P) -> det(X, P2, P1, P), n(X, P3), relclause(X, P3, P2).
np(X, P, P) -> name(X).
vp(X, P) -> transv(X, Y, P1), np(Y, P1, P).
vp(X, P) -> intransv(X, P).
relclause(X, P1, (And P1 P2)) -> [that], vp(X, P2).
relclause(X, P, P) -> [].
det(X, P1, P2, (ForAll X (-> P1 P2))) -> [every].
det(X, P1, P2, (ForSome X (And P1 P2))) -> [a].
n(X, (man X)) -> [man].
n(X, (woman X)) -> [woman].
n(X, (dog X)) -> [dog].
name(John) -> [John].
name(mary) -> [mary].
name(fido) -> [fido].
transv(X, Y, (loves X Y)) -> [loves].
transv(X, Y, (breathes X Y)) -> [breathes].
intransv(X, (loves X)) -> [loves].
intransv(X, (lives X)) -> [lives].
intransv(X, (breathes X)) -> [breathes].

Figure 2 gives a sentence in the language recognized by this grammar together with the associated surface syntactic structure and the semantic structure built by the grammar: the sentence "John loves every woman who breathes".

The way in which unification produces the appropriate bindings for this example is actually quite subtle, and requires a detailed analysis of the parse, as represented by the refutation graph in Figure 3. For the refutation graph the Prolog clauses have been put into clausal normal form. Some liberties have been taken with the ordering of the predicates in the interest of compactness.

In trying to establish the "s(P)" goal, the "np(X,P1,P)" goal is first attempted. The "P1" is an empty variable that is a "place-holder" for predicate information that will come from the verb. It will "hold" a place in the sentence structure that will be provided by the determiner. "P" is destined to contain the sentence structure. The first "np" clause will be matched, but it will eventually fail since no determiner is present. The second "np" clause will succeed, having forever identified the contents of "P1" with the contents of "P", whatever they may be. Since there is no determiner in the first noun phrase, there is no quantification information. The quantificational structure must be supplied by the verb phrase, so the structure for the sentence will be the same as the structure for the verb phrase. The variable "X" will be bound to "John".

In trying to establish "vp(John,P1)", the first "vp" clause will succeed, since "loves" is a transitive verb. It is important not to get the variables confused. Within the "vp" clause our original "P1" has been renamed "P", and we have a new "P1" variable that will be instantiated to "(loves John Y)" by the success of the "transv" goal. The "Y" is as yet undetermined, but we can see that it will be supplied by the next "np(Y,(loves John Y),P)" goal. It shows great foresight on "transv's" part to pass back a variable in such a way that it will correspond to a variable that has already been named. This pattern is repeated throughout the grammar, with powerful repercussions. It is even clearer in the success of the "np(Y,(loves John Y),P)" goal, where the presence of the determiner "every" causes "P" to be bound to

(ForAll Y (-> P1 (loves John Y)))

This "P" is of course the "P" mentioned above, which has been waiting for the verb phrase to supply it with a quantificational structure. As the relative clause for this "np" is processed, the "P1" embedded in this structure (our second new P1) is eventually bound to "(And (woman Y) (breathes Y))", giving us the full structure:

(ForAll Y (-> (And (woman Y) (breathes Y)) (loves John Y)))

This is what is returned as the binding to the first "P1" in the original "vp(X,P1)" goal. Since our "np(X,P1,P)" goal identified "P" with "P1", our "s(P)" goal succeeds with the binding of

(ForAll Y (-> (And (woman Y) (breathes Y)) (loves John Y)))

for "P" -- the final structure built for the sentence.

In following the execution of this grammar it becomes clear that very strong predictions are made about which parts of the parse will be supplying particular types of information. Determiners will provide the quantifiers for the propositional structure of the sentence, the first noun phrase and the noun phrase following the verb will be the two participants in the predicate implied by the verb, etc. Obviously this is a simple grammar, but the power of the logical variables can only be made use of through the encoding of these strong linguistic assumptions. DCG's seem to provide a mechanism well qualified for expressing such assumptions and then executing them. Coming up with the assumptions in the first place is, of course, something of a major task in itself.
Figure 4 shows an ATN grammar which is the "equivalent" of the DCG grammar given in Figure 1. The format used to specify the grammar is the one described in [finin1] and [finin2]. There are only two minor ways that this particular formalism differs from the standard ATN formalism described in [Woods70] or [Bates]. First, the dollar sign character (i.e. $) followed by the name of a register stands for the contents of that register. Second, the function DEFATN defines a set of arcs, each of which is represented by a list whose first element is the name of the state and whose remaining elements are the arcs emanating from the state. In addition, this example uses a very simple lexical manager in which a word has (1) a set of syntactic categories to which it belongs, (2) an optional set of features, and (3) an optional root form for the word. These attributes are associated with a word using the function LEX, which supplies appropriate default values for unspecified arguments.

In the standard ATN model, a PUSH arc invokes a sub-computation which takes no arguments and, if successful, returns a single value. One can achieve the effect of passing parameters to a sub-computation by giving a register an initial value via a SENDR register setting action. There are two methods by which one can achieve the effect of returning more than one value from a sub-computation: the values to be returned can be packaged into a list, or the LIFTR register setting action can be used to directly set values in the higher level computation. This grammar makes use of SENDR and LIFTR to pass parameters into and out of ATN computations and thus mimic the actions of the DCG example.

Consider what must happen when looking for a noun phrase. The representation for a NP will be a predicate if the noun phrase is indefinite (i.e. "a man" becomes (man X)) or a constant if the noun phrase is a name (i.e. "John" becomes John). In this simple language, a NP is dominated by either a sentence (if it is the subject) or by a verb phrase (if it is the object). In either case, the NP also determines, or must agree with, the overall structure being built. Similarly, when we are looking for a verb phrase, we must know what token (i.e. variable name or constant) represents the subject (if the verb phrase is dominated by a S) or the head noun (if the verb phrase acts as a relative clause). This is done by sending the subjvar register in the sub-computation the appropriate value via the SENDR function. The techniques used to handle quantification and build an overall sentence structure in this ATN grammar are similar to those used in the BBN Lunar Grammar [Woods72].

This heavy use of SENDR and LIFTR to communicate between levels in the grammar makes the ATN grammar cumbersome and difficult to understand. In the next section we investigate treating ATN registers as logical variables and providing a unification operation on them.
Although the previous ATN grammar does the job, it is clearly awkward. We can achieve much of the elegance of the DCG example by treating the ATN registers as logical variables and including a unification operation on them. We will call such registers ATN Variables. A symbol preceded by a "$" represents an ATN Variable, and "*" will again stand for the current constituent. Thus in the state S in the grammar:

(S (PUSH NP (UNIFY '($SUBJVAR $VP $S) *) (TO S/SUBJ)))

the parser pushes to the state NP to parse a noun phrase. If one is found, it will pop back with a value which will then be unified with the expression ($SUBJVAR $VP $S). If this unification is successful, the parser will advance to state S/SUBJ. If it fails, the arc is blocked, causing the parser to backtrack into the NP network.

Although our grammar succeeds in mimicking the behaviour of the DCG, there are some open questions involving the use of unification in parsing natural languages. An examination of this ATN grammar shows that we are really using unification as a method of passing parameters. The full power of unification is not needed in this example, since the grammar does not try to find "most general unifiers" for complicated sets of terms. Most of the time it is simply using unification to bind a variable to the contents of another variable. The most sophisticated use involves binding a variable in a term to another copy of that term which also has a variable to be bound, as in the "a man loves a woman" example in Figure 6. But even this binding is a simple one-way application of standard unification. It is not clear to the authors whether this is due to the simple nature of the grammars involved or whether it is an inherent property of the directedness of natural language parsing.

A situation where full unification might be required would arise when one is looking for a constituent matching some partial description. For example, suppose we were working with a syntactic grammar and wanted to look for a singular noun phrase. We might do this with the following PUSH arc:

(PUSH NP T (UNIFY * '(NP (DET $DET) ...

If we follow the usual schedule of interpreting ATN grammars, the unification will not occur until the NP network has found a noun phrase and popped back with a value. This would require a fully symmetric unification operation, since there are variables being bound to values in both arguments. It is also highly inefficient, since we may know right away that the noun phrase in the input is not singular. What we would like is to be able to do the unification just after the push is done, which would more closely parallel a Prolog-based DCG parse. Then an attempt to "unify" the number register with anything other than singular will fail immediately. This could be done automatically if we constrain a network to have only one state which does a pop and place some additional constraints on the forms that can be used as values to be popped. Although we have not explored this idea at any length, it appears to lead to some interesting possibilities.
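The "fail as early as possible" behavior argued for here can be suggested with a much-simplified stand-in for unification over feature maps (hypothetical Python, for illustration only; the actual proposal uses full unification of ATN Variables):

def compatible(partial, found):
    # A feature specified (non-None) in both descriptions must agree; None is "unbound".
    return all(v is None or found.get(k) is None or found[k] == v
               for k, v in partial.items())

sought = {"cat": "NP", "number": "singular", "det": None}
parsed = {"cat": "NP", "number": "plural", "det": "these"}
print(compatible(sought, parsed))   # -> False: the PUSH arc can be blocked immediately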
We have found the use of logical variables and unification to be a powerful technique in parsing natural language. It is one of the main sources of the strengths of the Definite Clause Grammar formalism. In attempting to capture this technique for an ATN grammar we have come to several interesting conclusions. First, the strength of the DCG comes as much from the skillful encoding of linguistic assumptions about the eventual outcome of the parse as from the powerful tools it relies on. Second, the notion of logical variables (with unification) can be adapted to parsing systems outside of the theorem proving paradigm. We have successfully adapted these techniques to an ATN parser and are beginning to embed them in an existing parallel bottom-up parser [finin3]. Third, the full power of unification may
Main paper: dcg's provide greater freedom in the range of: structures that can be built in the course of analyzing a constituent. [n particular the DCG formalism makes it easy to create structures that do not follow the structure implied by the rules of a conscltuenc and easy Co create a structure for a constituent thac depends on items not yec encountered in the sentence. The flrsC two points have been discussed in the past whenever the ATN formalism is compared with a rule-based grammar (see [PracC] , [Heldorn] , [Codd] , or [Bates] ).The outcome of such discussions vary.It is safe Co say chat how one feels about these points depends quite heavily on past experience in using the two formalisms.We find the third point co be well founded, however.Ic is clear chac the DCG differs moeC from previous rule-baaed parsing systems in ice inclusion of Logical variables.These We have built an experimental ATN system which can crest ATN registers as Logical variables and, we feel, capture these important strengths offered by the DCG formalism in the ocherwlse standard ATN formalism.The second section gives a more detailed desctpCton of DCG's and presents a simple grammar. In the third section we show am ATN grammar which is "equivalent" to the DCC grammar and discuss the source of Its awkwardness.The fourth section chert presence an ATN formalism extended co include viewing ATN registers as Logical variables which are subject to the standard unlficacloa operaclon. The final section concludes this note and suggests that logical variables might be fruitfully introduced into ocher parsing algorithms and systems.rip(X, Pl, P) -> dec(X, P2, PI, P), n(X, P3), relclauee(X, P3, P2).rip(X, P, P) -> name(X).vp(X, P) -> tranev(X, Y, Pl), np(Y, Pl, P). vp(X, P) -> tncransv(X, P). Figure 2 gives a sentence in the language recognized by thls grammar together wlth the associated surface syntactic structure and the semantic structure built by the grammar.)) -> [chat], vp(X, P2). relclauae(X, P, P) -> [], dec(X, Pl, P2, (ForAll X (-> P! P2))) -> [everyl. dec(X, Pl, P2, (ForSome X (And Pt P2))) -> [a]. n(X, (man X)) -> [,u.]. n(X, (woun X)) -> [wom~.]. n(X, (dog X)) -> [dog]. name(John) -> [John] name(mary) -> [mary] namI(fldo) -> [fido] transv(X, Y, (loves X Y)) -> [loves]. transv(X, Y, (breaches X Y)) -> [breathesl. Incranev(X, (loves X) -> [loves]. lncransv(X, (lives X).-> [lives]. incranev(X, (breathes X) -> [breathes].A Sentence, Structure and Representation SENTENCE "John loves every woman who breathes"The way in which unification produces the appropriate bindings for this example ls actually quite subtle, and requires a detailed analysis of the parse, as represented by the refutation graph in Figure 3 .For the the refutation graph the Prolog clauses have been put into claueal normal form.Some liberties have been taken with the ordering of the predicates in the interest of compactness.In trying to establish the "s(P)" goal, the "np(X,Pt,P)" is first attempted.The "PI" is an empty variable that is a "place-holder" for predicate information chat will come from the verb. It will "hold" a place in the sentence structure that will be provided by =he determiner. "P" is destined to contain the sentence structure. The ~/ (P2 is bound Co "breathes(Y)") first "np" clause will be matched, but it will eventually fall since no determiner is present. 
The second "rip" clause will'succeed, having forever identified the contents of "Pl" with the contents of "P, " whatever they may be.Since there is no determiner in the first noun phrase, there is no quantification information.The quantlflcatlonal structure must be supplied by the verb phrase, so the structure for the sentence will be the same as the structure for the verb phrase. The variable "X" will be bound to "John".In trying co establish "vp(John,Pl), " the first "wp" clause w(ll succeed, since "loves" is a transitive verb. It is important not to get the variables confused.Within the "vp" clause our original "Pl" has been renamed "P" and and we have a new "PI" variable that will be Instantlated to "(loves John Y)" by the success of the "=canny" goal. The "Y" Is as yet undetermined, but we can see that It will be supplied by the next "np(Y,(loves John ¥),P)" goal. It shows great foresight on "transv's" part to pass back a variable in such a way that it will correspond to a variable that has already been named.This pattern is repeated throughout the grammar, with powerfull repurcusslons.It is even clearer In the success of the "np(Y,(loves John Y),P)" goal, where the presence of the determiner "every" causes "P" to be bound to (Forall Y (-> PI (loves John Y))This "P" is of course the "P" mentioned above which has been waiting for the verb phrase to supply It with a quantlflcatlonal structure.As the relative clause for this "up" is processed, the "PI" embedded in this structure, (our second new PII), is eventually bound to "(And (woman Y) (breaches Y))" giving us the full structure:(Forall Y (-> (And (woman Y) (breaches Y)) (loves John Y)))This is whac is returned as the binding to the first "Pl" in the original "vp(X,Pt)" goal. Since our "np(X,P[,F)" goal identified "P" wlth "Pl, " our "s(P)" goal succeeds with the binding of (Forall Y (=> (And (woman Y) (breathes Y)) (loves John Y)))for "P" -the final structure built for the sentence.In following the execution of this grammar it becomes clear that very ~trong predictions are made about which parrs of the parse will be supplying particular ~ypes of information.Determiners will provide the quanClElers for the propositional ~tructure of the sentence, the flrsc noun phrase and the noun phrase following the verb will be the two participants in ~he predicate implied by the verb, etc. Obviously this is a simple grammar, but the power of the logical variables can only be made use of through the encoding of these strong linguistic assumptions. DCG's seem to provide, a =echanlsm well qualified for expressing such assumptions and then executing them. Coming up with the assumptions in the first place Is, of course, something of a major task In itself. Figure 4 shows an ATN grammar which is the "equivalent" of the DCG grammar given in Figure t . The format used to specify the grammar is the one described in [flninl] and [finln2] . There are only two minor ways that this particular formalism differs from the standard ATN formalism described in [WoodsY0] or [Bates] . First, the dollar sign Character (i.a. $) followed by the name of a register stands for the contents of that register. 
Second, the function DEFATN defines a set of arcs, each of which is represented by a llst whose first element is the name of the state and whose remaining elements are the arcs emanating from the state.In addition, this example uses a very simple lexical manager in which a word has (1) a set of syntactic categories to which It belongs (2) an optional set of features and (3) an optional root form for the word. These attributes are associated with a word ualng the function LEX, which supplies appropriate default values for unspecified arguments.In the standard ATN model, a PUSH arc invokes a sub-computatlon which takes no arguments and, if successful, returns a single value.One can achieve the affect of passing parameters to a sub-computatlon by giving a register an initial value via a SENDR register setting action. There are two methods by which one can achieve the effect of returning more than one value from a sub-computatlon. The values to be returned can be packaged into a llst or the LIFTR register setting action can be used to directly set values in the higher level computation.This grammar makes use of SENDR and LIFTR to pass parameters into and ouC of ATN computations and thus the actions of the DCC example.Consider what must happen when looking for a noun phrase.The representation for a NP will be a predicate if the noun phrase is indefinite (i.e. "a man" becomes (man X)) or a constant If the noun phrase is a name (l.e. "John" becomes John). in this simple language, a NP is dominated by a either a sentence (if it is the subject) or by a verb phrase (if It ts the object).[n either case, the NP also determines, or must agree with, the overall Similarly, when we are lookzn8 for a verb phrase, we must know what token (i.e. variable name or constant) represents the subject (if the verb phrase is dominated by a S) or the head noun (if the verb phrase acts as a relative clause). This is done by sanding the subJvar register in the sub-computation the appropriate value via the SENDR function.The techniques used to quancificatlon and build an overall sentence structure in chls ATN grammar are similar co those used in th~ BBN Lunar Grammar [Woods72] .This heavy use of SENDR and LIFTR co communicate between levels in the grammar makes the ATN grammar cumbersome and difficult to unaerstand. In the next secton we investigate treating ATN registers as logic variables and providing a unification operation on them. replacing atn registers with atn variables: Although the previous &TN grammar does the Job, it is clearly awkward.We can achieve much of the elegance of the DCG example by treating the ATN registers as logical variables and including a unification operation on them.We will call such registers ATN Variables. 
A symbol preceded by a "$" represents an ATN Variable and "*" will again stand for ~he current constituenE.Thus in the state S in the grammar:(S (PUSH NP (UNIFY "($SUBJVAR gYP $S) *) (TO S/SUBJ))) the parser pushes to the state NP co parse a noun phrase.If one is found, it will pop back wi~h a value which will then be unified wi~h the expression (SSUBJVAR $VF $S).If this unification is successful, the parser will advance to state S/SUBJ.If It fails, the arc is blocked causing the parser to backtrack into the NP network.Although our grammar succeeds in mimicking the behavlour of the DCG, there are some open questions Involvlng the use of unification [n parsing natural languages.An examination of ~his ATN grammar shows that we are really using unification as a method of passing parameters.The full power of unlficatton ls noc needed In this example since the 67 grammar does not try to find "most-general unifiers" for complicated sets of terms.Most of the time it is simply using unification to bind a variable to the contents of another variable. The most sophisticated use involves binding a variable in a term to another copy of that term which also has a variable to be bound as in the "a man loves a woman" example in Figure 6 .But even this binding is a simple one-way application of standard unification.St is not clear to the authors whether this is due to the simple nature of the grammars involved or whether it is an inherent property of the dlrectedneee of natural language parsing.A situation where full unification eight be required would arise when one is looking for a constituent matching some partial description.For example, suppose we were working with a syntactic grammar and wanted to look for a singular noun phrase.We might do this with the following PUSH arc:(PUSH NP T (UNIFY * '(NP (DET eDET)If we follow the usual schedule of interpreting ATN gra.---rs the unification will not occur until the NP network has found a noun phrase and popped back with a value. This would require a fully symmetric unification operation since there are variables being bound to values in both arguments. It is also highly inefficient since we may know rlghc away that the noun phrase in the input is not singular. What we would iike is to be able to do the unification Just after the push is done, which would more closely parallel a Prolog-based DCG parse.Then an attempt to "unify" the number register with anything other than singular will fall immediately.This could be done automatically if we constrain a network to have only one state which does a pop and place some additional constraints on the forms that can be used as values to be popped. Although we have not explored this idea at any length, it appears to lead co some interesting possibilities. conclusions: We have found the use of logical variables and unification to be a powerful technique in parsing natural language. 
It [s one of the main sources of the strengths of the Definite Clause Grammar formalism.In attempting to capture this technique for an ATN grammar we have come co several interesting conclusions, First, the strength of the DCG comes as much from the skillful encoding of linguistic assumptions about the eventual outcome of the parse as from the powerful tools it relies on.Second, the notion of logical variables (with unification) can be adapted to parsing systems ouside of the theorem proving paradigm.We have successfully adapted these techniques to an ATN parser and are beginning to embed them in an existing parallel bottom-up parser [flnln3] . Third, the full power of unlfication may introduction: Logic based programming systems have enjoyed an increasing popularity in applied AI work in the last few years.One of the contributions to Computational Linguistics made by the Logic Programming Paradigm has been the Deflnite Clause Grammar.An excellent introduction to this formalism can be found in [Perelra] in which the authors present the formalism and make a detailed comparison to Augmented Transition Networks as a means of both specifying a language and parsing sentences in a language.We feel Chat the major strengths offered by the DCG formalism arise from its use of Logical variables with Unification as the fundamental operation on them. These techniques can be abstracted from the theorem proving paradigm and adapted to other parsing systems (see [Kay] and [Bossie] ). We have implemented an experimental ATN system which treats ATN registers as Logic variables and provides a unification operation over them.The DCG formalism provides a powerful mechanism for parsing based on a context free grammar.The grammar rule S -> NP VP can be seen as the universally quantified logical statement,For all x, y, and z : N'P(x) /\ VP(y) /\ Concatenate(x,y,z) -> S(z).where "x" and "y" represent sequences of words which can be concatenated together to produce a sentence, "S." Prolog, a progra~mulng language baaed on predicate calculus, allows logical statements to be input as Horn clauses in the foilowlng (reversed) form: s(Z) <-np(X),vp(Y),Concatenate(X,Y,Z).The resolution theorem prover that "interprets" the Prolog clauses would take the oegatlon of S as the goal and try and produce the null clause.Thus the preceding clause can be interpreted procedurally as, "To establish goal S, try and establish subgoals, NP, VP and Concatenate." DCG's provide syntactic sugar on top of Prolog so that the arrow can be reversed and the "Concatenate" predicate can be dispensed with.The words in the input string are looked at sequentially each time a "[Word]" predicate is executed which implicitly tests for concatenation (see figure [ ).DCG's allow grammar rules to be expressed very cleanly, while still allowing ATN-type augmentation through the addition of arbitrary tests on the contents of the variables.and Warren argue that the DCG formalism is well suited for specifying a formal description of a language and also for use with a parser.In particular, they assert that it is a significant advance over an ATN approach on both philosophical and practical grounds.Their chief claims are that:[. DCGs provide a common formalism for theoretlcal work in Computational Linguistics and for writing efficient natural language processors.The rule based nature of a DCG result %n systems of greater clarity and modularity. Appendix:
null
null
null
null
{ "paperhash": [ "finin|an_interpreter_and_compiler_for_augmented_transition_networks", "pratt|lingol:_a_progress_repor", "heidorn|augmented_phrase_structure_grammars", "allen|a_functional_grammar" ], "title": [ "An Interpreter and Compiler for Augmented Transition Networks", "LINGOL: a progress repor", "Augmented Phrase Structure Grammars", "A Functional Grammar" ], "abstract": [ "Abstract : This thesis is intended to both document the implementation of the ATN (Augmented Transition Networks) interpreter and compiler and to serve as a manual for anyone interested in using it. Chapter II gives a brief description of ATN's and discusses some of the high level design considerations. Chapter III describes the interpreter and the auxiliary functions available to the user in some detail. Chapter IV presents the compiler which can translate ATN networks into LISP code or machine language instructions. Chapter V describes the dictionary format expected by the interpreter. Also discussed are the various functions provided for creating and maintaining dictionaries. Chapter VI documents several packages of auxiliary functions provided for interfacing the ATN system with the LISP editor and Prettyprinter. Appendices include a simple parser for English, a sample dictionary, and examples of their operation. An index to function calls and global variables and a bibliography conclude this thesis.", "A new parsing algorithm is described. It is intended for use with advice-taking (or augmented) phrase structure grammars of the type used by Woods, Simmons. Heidorn and the author. It has the property that it is guaranteed not to propose a phrase unless there exists a continuation of the sentence seen thus far, in which the phrase plays a role in some surface structure of that sentence. The context in which this algorithm constitutes a contribution to current issues in parsing methodology is discussed, and we present a case for reversing the current trend to ever more complex control structures in natural language systems.", "Augmented phrase structure grammars consist of phrase structure rules with embedded conditions and structure-building actions written in a specially developed language. An attribute-value, record-oriented information structure is an integral part of the theory.", "Functional Grammar describes grammar in functional terms in which a language is interpreted as a system of meanings. The language system consists of three macro-functions known as meta-functional components: the interpersonal function, the ideational function, and the textual function, all of which make a contribution to the structure of a text. The concepts discussed in Functional Grammar aims at giving contribution to the understanding of a text and evaluation of a text, which can be applied for text analysis. Using the concepts in Functional Grammar, English teachers may help the students learn how various grammatical features and grammatical systems are used in written texts so that they can read and write better." ], "authors": [ { "name": [ "Timothy W. Finin" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "V. Pratt" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "George E. Heidorn" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "H. B. Allen", "M. 
Bryant" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null, null ], "s2_corpus_id": [ "58246696", "61900604", "2658668", "150098969" ], "intents": [ [], [], [], [ "methodology" ] ], "isInfluential": [ false, false, false, false ] }
- Problem: The paper aims to compare the advantages of Definite Clause Grammar (DCG) in Computational Linguistics with previous parsing mechanisms like Augmented Transition Networks (ATN). - Solution: The hypothesis is that the use of Logical Variables with Unification as the fundamental operation in DCG provides clear advantages over previous parsing mechanisms like ATN, and these advantages can be captured without the need for a resolution theorem prover.
504
0.009921
null
null
null
null
null
null
null
null
940f0042015afadebd246d91e390b4e071aa973d
5574617
null
Utilizing Domain-Specific Information for Processing Compact Text
This paper identifies the types of sentence fragments found in the text of two domains: medical records and Navy equipment status messages. The fragment types are related to full sentence forms on the basis of the elements which were regularly deleted. A breakdown of the fragment types and their distributions in the two domains is presented. An approach to reconstructing the semantic class of deleted elements in the medical records is proposed which is based on the semantic patterns recognized in the domain.
{ "name": [ "Marsh, Elaine" ], "affiliation": [ null ] }
null
null
First Conference on Applied Natural Language Processing
1983-02-01
12
20
null
A large amount of natural language input, whether to text processing or question-answering systems, consists of shortened sentence forms, sentence "fragments". Sentence fragments are found in informal technical communications, messages, headlines, and in telegraphic communications. Occurrences are characterized by their brevity and informational nature. In all of these, if people are not restricted to using complete, grammatical sentences, as they are in formal writing situations, they tend to leave out the parts of the sentence which they believe the reader will be able to reconstruct. This is especially true if the writer deals with a specialized subject matter where the facts are to be used by others in the same field. Several approaches to such "ill-formed" natural language input have been followed. The LIFER system [Hendrix, 1977; Hendrix et al., 1978] and the PLANES system [Waltz, 1978] both account for fragments in procedural terms; they do not require the user to enumerate the types of fragments which will be accepted. The Linguistic String Project has characterized the regularly occurring ungrammatical constructions and made them part of the parsing grammar [Anderson et al., 1975; Hirschman and Sager, 1982]. Kwasny and Sondheimer (1981) have used error-handling procedures to relate the ill-formed input of sentence fragments to well-formed structures. While these approaches differ in the way they determine the structure of the fragments and the deleted material, for the most part they rely heavily, at some point, on the recognition of semantic word-classes. The purpose of this paper is to describe the syntactic characteristics of sentence fragments and to illustrate how the domain-specific information embodied in the co-occurrence patterns of the semantic word-classes of a domain can be utilized as a powerful tool for processing a body of compact text, i.e. text that contains a large percentage of sentence fragments. The New York University Linguistic String Project has developed a computer program to analyze compact text in specialized subject areas using a general parsing program and an English grammar augmented by procedures specific to the subject areas. In recent years the system has been tailored for computer analysis of free-text medical records, which are characterized by numerous sentence fragments. In the computer analysis and processing of the medical records, relatively few types of sentence fragments sufficed to describe the shortened forms, although such fragments comprised fully 49% of the natural language input [Marsh and Sager, 1982]. Fragment types can be related to full forms on the basis of the elements which are regularly deleted. Elements deleted from the fragments are from one or more of the syntactic positions: subject, tense, verb, object. The six fragment types identified in the set of medical records are shown in Table 1 as types I-VI. What is not immediately obvious is the fact that they are already known in the full grammar as parts of fuller constructions. The fragment types reflect deletions found in syntactically distinguished positions within full sentences, as illustrated in Table 2.
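The following is a minimal, illustrative sketch (not part of the paper) of the view just described: each fragment type is treated as a full SUBJECT-TENSE-VERB-OBJECT pattern with particular slots deleted. The specific slot sets are assumptions standing in for Table 1; only types I and II are spelled out here. The discussion of the example constructions continues below.

```python
# A minimal sketch (not the Linguistic String Project implementation) of
# encoding fragment types as full S-V-O patterns with deleted slots.
# The slot sets for each type are illustrative assumptions; the paper's
# Table 1 defines the actual six types.

SLOTS = ("subject", "tense", "verb", "object")

# Hypothetical inventory: each fragment type names the slots it omits.
FRAGMENT_TYPES = {
    "I": {"tense", "verb"},               # e.g. tense and the verb 'be' deleted
    "II": {"subject", "tense", "verb"},
    # ... types III-VI would be filled in from Table 1
}

def candidate_types(present_slots):
    """Return the fragment types whose deletions account for the missing slots."""
    missing = set(SLOTS) - set(present_slots)
    return [t for t, deleted in FRAGMENT_TYPES.items() if deleted == missing]

if __name__ == "__main__":
    # A fragment with only a subject and an object present (tense and verb deleted)
    print(candidate_types({"subject", "object"}))   # -> ['I']
```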
For example, in normal English, a sentence that contains tense and the verb be can occur as the object of verbs like find (e.g. She found that the sentence was …). In the same environment, as object of find, a reduced sentence can occur in which the tense and verb be have been omitted, as in fragment type I (e.g. She found the sentence …). In the same manner, other reduced forms reflected in fragment types also represent constructions generally found as parts of regular English sentences. The fact that the fragment types can be related to full English forms makes it possible to view them as instances of reduced SUBJECT-VERB-OBJECT patterns from which particular components have been deleted. Fragments of type I can be represented as having a deleted tense and verb be, of type II as having a deleted subject, tense, and verb be, etc. This makes it relatively straightforward to add them to the parsing grammar, and, at the same time, provides a framework for identifying their semantic content by relating them to the corresponding full forms.

[Table 1 and Table 2 appear here in the original; their contents are not recoverable from the extraction.]

The number of fragment types that occur in compact text of different technical domains appears to be relatively limited. When the fragment types found in medical records were compared with those seen in a small sample of Navy equipment status messages, five of the six types found in the medical records were also found in the Navy messages. Only one additional fragment type was required to cover the Navy messages. This type also appears in Table 1; it is infrequent in the medical records, but much more frequent in the Navy messages. In addition, the different sections of the input differ with respect to the ratio of fragments to whole sentences and in the types of fragments they contain. For example, the different sections of the medical records that were analyzed (e.g. …). The deletions which relate fragment types to their full sentence forms fall into two main classes: (i) those found virtually in all texts and (ii) those specific to the domain of the text. Just as the fragment types can be viewed as incomplete realizations of syntactic S-V-O structures, the semantic patterns in sentence fragments can be considered incomplete realizations of the semantic S-V-O patterns. In general terms, the structure of information in technical domains can be specified by a set of semantic classes, the words and phrases which belong to these classes, and by a specification of the patterns these classes enter into, i.e. the syntactic relationships among the members of the classes [Grishman et al., 1982; Sager, 1978]. In the case of the medical sublanguage processed by the Linguistic String Project, the medical subclasses were derived through techniques of distributional analysis [Hirschman and Sager, 1982].
Semantic S-V-O patterns were then derived from the combinatory properties of the medical classes in the text [Marsh and Sager, 1982]; the semantic patterns identified in a text are specific to the domain of the text. While they serve to formulate sublanguage constraints which rule out incorrect syntactic analyses caused by structural or lexical ambiguity, these relationships among classes can also provide a means by which deleted elements in compact text can be reconstructed. When a fragment is recognized as an instance of a given semantic pattern, it is then possible to specify a set of the semantic classes from which the medical sublanguage class of the deleted element can be selected. On a superficial level, the deletions of be in fragment types Ic-f and IIIa-b, for example, can be reconstructed on purely syntactic grounds by filling in the lexical item be. However, it is also possible to provide further information and specify the semantic class of the lexical item be by reference to the semantic S-V-O pattern manifested by the occurring subject and object. For example, in the type If fragment skin no eruptions, skin has the medical subclass BODYPART, and eruptions has the medical subclass SIGN/SYMPTOM. The semantic S-V-O pattern in which these classes play a part is BODYPART-SHOWVERB-SIGN/SYMPTOM (as in Skin showed no eruptions). Be can then be assigned the semantic class SHOWVERB. The type I fragment protein …, similarly, enters into the semantic pattern TEST-TESTVERB-TESTRESULT, and be can be assigned the class TESTVERB, which relates a TEST subject with a TESTRESULT object. Assigning a semantic class to the reconstructed be maximizes its informational content. In addition to reconstructing a distinguished lexical item, like the verb be, along with its semantic classes, it is also possible to specify the set of semantic classes for a deleted element, even though a lexical item is not immediately reconstructable. For example, the fragment To receive folic acid, of Type VI, contains a verb of the PTVERB class and a MEDICATION object, but the subject has been deleted. The only semantic pattern which permits a verb and object with these medical subclasses is the S-V-O pattern PATIENT-PTVERB-MEDICATION. Through recognition of the semantic pattern in which the occurring elements of the fragment play a role, the semantic class PATIENT can be specified for the deleted subject. Patient is one of the distinguished words in the domain of narrative medical records which are often not explicitly mentioned in the text, although they play a role in the semantic patterns. The S-V-O relations, of which the fragment types are incomplete realizations, form the basis of a procedure which specifies the semantic classes of deleted elements in fragments. Under the best conditions, the set of semantic classes for the deleted form contains only one element. It is also possible, however, for the set to contain more than one semantic class. For example, the type Ia fragment Pain also noted in hands and knees, when regularized to normal active S-V-O word order as noted pain in hands and knees, has a deleted subject. (See Figure 1.) The choice of one subclass for the deleted element from among the elements of the set of possible subclasses is dependent on several factors.
First, properties of the paragraph structure of the text place restrictions on the selection of semantic class for a deleted element. The fragment noted pain in hands and knees would select a DOCTOR subject if written in the IMPRESSION or EXAM paragraph of the text, but, in the HISTORY paragraph, a PATIENT or FAMILY subject could not be excluded. A second factor is the presence of an antecedent having one of the semantic classes specified for the deleted element. If a possible antecedent having the same semantic class can be found, subject to restrictions on change of topic and discourse structure, then the deleted element can be filled in by its antecedent, restricting the semantic class of the deleted element to that of the antecedent. However, an antecedent search may not always be successful, since the antecedent may not have been explicitly mentioned in the text. The antecedent may be one of a class of distinguished words in the sublanguage, such as patient and …, which may not be previously mentioned in the body of the text. Semantic classes can be specified for deleted elements in sentence fragments based on these semantic patterns.
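A hedged sketch of the reconstruction procedure described in this introduction, using toy data: the classes of the occurring elements select a domain S-V-O pattern, yielding candidate classes for the deleted element, which are then narrowed by paragraph section and antecedents. The patterns and section table are illustrative assumptions (e.g. OBSERVEVERB is an invented class name), not the Linguistic String Project's actual tables.

```python
# Illustrative sketch of reconstructing the semantic class of a deleted element
# from domain S-V-O patterns, then narrowing the candidate set.

PATTERNS = [
    ("BODYPART", "SHOWVERB", "SIGN/SYMPTOM"),
    ("TEST", "TESTVERB", "TESTRESULT"),
    ("PATIENT", "PTVERB", "MEDICATION"),
    ("DOCTOR", "OBSERVEVERB", "SIGN/SYMPTOM"),     # hypothetical class for 'noted'
    ("PATIENT", "OBSERVEVERB", "SIGN/SYMPTOM"),
]

SECTION_SUBJECTS = {"EXAM": {"DOCTOR"}, "IMPRESSION": {"DOCTOR"},
                    "HISTORY": {"DOCTOR", "PATIENT", "FAMILY"}}

def classes_for_deleted(subject=None, verb=None, obj=None):
    """Return possible classes for whichever S-V-O position is missing."""
    out = set()
    for s, v, o in PATTERNS:
        if subject not in (None, s) or verb not in (None, v) or obj not in (None, o):
            continue
        if subject is None: out.add(s)
        if verb is None: out.add(v)
        if obj is None: out.add(o)
    return out

def narrow(candidates, section=None, antecedent_classes=()):
    """Restrict candidates by the paragraph section, then prefer antecedent classes."""
    if section in SECTION_SUBJECTS:
        candidates = candidates & SECTION_SUBJECTS[section]
    hit = candidates & set(antecedent_classes)
    return hit or candidates

# "skin no eruptions": the deleted 'be' gets the verb class SHOWVERB.
print(classes_for_deleted(subject="BODYPART", obj="SIGN/SYMPTOM"))   # {'SHOWVERB'}
# "noted pain in hands and knees": deleted subject, narrowed by section.
subs = classes_for_deleted(verb="OBSERVEVERB", obj="SIGN/SYMPTOM")   # {'DOCTOR', 'PATIENT'}
print(narrow(subs, section="EXAM"))                                  # {'DOCTOR'}
```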
null
null
null
null
null
null
null
null
{ "paperhash": [ "marsh|analysis_and_processing_of_compact_text", "grishman|natural_language_interfaces_using_limited_semantic_information", "kwasny|relaxation_techniques_for_parsing_grammatically_ill-formed_input_in_natural_language_understanding_systems", "waltz|an_english_language_question_answering_system_for_a_large_relational_database", "hendrix|developing_a_natural_language_interface_to_complex_data", "herdrix|human_engineering_fcr_applied_natural_language_processing", "hendrix|human_engineering_for_applied_natural_language_processing", "anderson|grammatical_compression_in_notes_and_records:_analysis_and_computation" ], "title": [ "Analysis and Processing of Compact Text", "Natural Language Interfaces Using Limited Semantic Information", "Relaxation Techniques for Parsing Grammatically Ill-Formed Input in Natural Language Understanding Systems", "An English language question answering system for a large relational database", "Developing a natural language interface to complex data", "Human engineering fcr applied natural language processing", "Human Engineering for Applied Natural Language Processing", "Grammatical Compression in Notes and Records: Analysis and Computation" ], "abstract": [ "This paper describes the characteristics of compact text as revealed in computer analysis of a set of physician notes. Computer processing of the documents was performed using the LSP system for natural language analysis. A numerical breakdown of syntactic and semantic patterns found in the texts is presented. It is found that four major properties of compact text make it possible to process the content of the documents with syntactic procedures that operate on full free text.", "In order to analyze their input properly, natural language interfaces require access to domain-specific semantic information. However, design considerations for practical systems -- in particular, the desire to construct interfaces which are readily portable to new domains -- require us to limit and segregate this domain-specific information. We consider here the possibility of limiting ourselves to a characterization of the structure of information in a domain. This structure is captured in a domain information schema , which specifies the semantic classes of the domain, the words and phrases which belong to these classes, and the predicate-argument relationships among members of these classes which are meaningful in the domain. We describe how this schema is used by the various stages of two large natural language processing systems.", "This paper investigates several language phenomena either considered deviant by linguistic standards or insufficiently addressed by existing approaches. These include co-occurrence violations, some forms of ellipsis and extraneous forms, and conjunction. Relaxation techniques for their treatment in Natural Language Understanding Systems are discussed. These techniques, developed within the Augmented Transition Network (ATN) model, are shown to be adequate to handle many of these cases.", "By typing requests in English, casual users will be able to obtain explicit answers from a large relational database of aircraft flight and maintenance data using a system called PLANES. The design and implementation of this system is described and illustrated with detailed examples of the operation of system components and examples of overall system operation. 
The language processing portion of the system uses a number of augmented transition networks, each of which matches phrases with a specific meaning, along with context registers (history keepers) and concept case frames; these are used for judging meaningfulness of questions, generating dialogue for clarifying partially understood questions, and resolving ellipsis and pronoun reference problems. Other system components construct a formal query for the relational database, and optimize the order of searching relations. Methods are discussed for handling vague or complex questions and for providing browsing ability. Also included are discussions of important issues in programming natural language systems for limited domains, and the relationship of this system to others.", "Aspects of an intelligent interface that provides natural language access to a large body of data distributed over a computer network are described. The overall system architecture is presented, showing how a user is buffered from the actual database management systems (DBMSs) by three layers of insulating components. These layers operate in series to convert natural language queries into calls to DBMSs at remote sites. Attention is then focused on the first of the insulating components, the natural language system. A pragmatic approach to language access that has proved useful for building interfaces to databases is described and illustrated by examples. Special language features that increase system usability, such as spelling correction, processing of incomplete inputs, and run-time system personalization, are also discussed. The language system is contrasted with other work in applied natural language processing, and the system's limitations are analyzed.", "Human engineering features for enhancing the usability of practical natural language systems are described. Such features include spelling correction, processing of incomplete (elliptical) input?, of the underlying language definition through English queries, and their ability for casual users to extend the language accepted by the system through the use of synonyms and peraphrases. All of the features described are incorporated in LJFER, -\"applications-oriented system for creating natural language interfaces between computer programs and casual USERS LJFER's methods for the mroe complex human engineering features presented.", "Human engineering features for enhancing the usabil ity of practical natural language systems a l re described. Such features include spelling correction, processing of incomplete (ell ipt ic-~I) input?, jntfrrog-t ior of th p underlying language definition through English oueries, and ?r rbil.it y for casual users to extrnd the language accepted by the system through the-use of synonyms ana peraphrases. All of 1 h* features described are incorporated in LJFER,-\"n r ppl ieat ions-orj e nlf d system for 1 creating natural language j nterfaees between computer programs and casual USERS LJFER's methods for r<\"v] izir? the mroe complex human enginering features ? re presented. 1 INTRODUCTION This pape r depcribes aspect r of a n applieations-oriented system for creating natural langruage interfaces between computer software and Casual users. Like the underlying researen itself, the paper is focused on the human engineering involved in designing practical rnd comfortable interfaces. 
This focus has lead to the investigation of some generally neglected facets of language processing, including the processing of Ireomplfte inputs, the ability to resume parsing after recovering from spelling errors and the ability for naive users to input English stat.emert s at run time that, extend and person-lize the language accepted by the system. The implementation of these features in a convenient package and their integration with other human engineering features are discussed. There has been mounting evidence that the current state of the art in natural language processing, although still relatively primitive, is sufficient for dealing with some very real problems. For example, Brown and Burton (1975) have developed a usable system for computer assisted instruction, and a number of language systems have been developed for interfacing to data bases, including the REL system developed by Thompson and Thompson (1975), the LUNAR system of Woods et al. (1972), and the PLANES system ol Walt7 (1975). The SIGART newsletter for February, 1977, contains a collection cf 5? short overviews of research efforts in the general area of natural language interfaces. Tnere has rise been a growing demand for application systems. At SRi's Artificial Irtellugene Center alone, many programs are ripe for the addition of language capabilities, Including systems for data base accessing, industrial automation, automatic programming, deduct ior, and judgmental reasoning. The appeal cf these systems to builders ana users .-'like is greatly enhanced when they are able to accept natural language inputs. B. The LIFER SYSTEM To add …", "~inguistic mechanisms of compression are used when making notes within a context where the objects and meanings are known. Mechanisms of compressidn in medical records for a collaborative study of breast cancer are described. The syntactic devices were mainly deletion of words having a special status in the grammar of the whole language and deletion in particular positions of word+ having a special sta&us in the sublanguage. The deIeted forms are described and sublanguage Qord classes defined. A subcorpus of the medical records was parsed by an existing computer parsing system; a component covering the dele-tion-forms was added to the granunar. Modifications to t,he computer grammar are discussed and the parsing results are summarized." ], "authors": [ { "name": [ "E. Marsh", "N. Sager" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "R. Grishman", "L. Hirschman", "C. Friedman" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "S. Kwasny", "N. Sondheimer" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "D. Waltz" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "G. Hendrix", "E. Sacerdoti", "Daniel Sagalowicz", "Jonathan Slocum" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Gary G. 
Herdrix" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "G. Hendrix" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "B. B. Anderson", "N. Sager" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null, null, null, null, null, null ], "s2_corpus_id": [ "17021487", "15843411", "181820", "18227465", "15391397", "59814145", "5436772", "219302397" ], "intents": [ [ "methodology" ], [], [], [], [], [], [ "background" ], [] ], "isInfluential": [ false, false, false, false, false, false, false, false ] }
- Problem: The paper aims to identify and analyze the types of sentence fragments present in two different domains, namely medical records and Navy equipment status messages, and to understand the relationship between these fragments and full sentence forms based on the deleted elements. - Solution: The paper proposes an approach to reconstruct the semantic class of deleted elements in medical records by leveraging semantic patterns recognized within the domain, which can aid in understanding and interpreting the fragmented text more effectively.
504
0.039683
null
null
null
null
null
null
null
null
2fbe42997734010b1dbc9c44a62af29d60aecd16
1044201
null
Knowledge Based Question Answering
The natural language database query system incorporated in the KNOBS interactive planning system comprises a dictionary driven parser, APE-II, and script interpreter which yield a
{ "name": [ "Pazzani, Michael J. and", "Engelman, Carl" ], "affiliation": [ null, null ] }
null
null
First Conference on Applied Natural Language Processing
1983-02-01
17
39
null
null
null
conceptual dependency conceptualization as a representation of the meaning of user input. A conceptualization pattern matching production system then determines and executes a procedure for extracting the desired information from the database. In contrast to syntax driven Q-A systems, e.g., those based on ATN parsers, APE-II is driven bottom-up by expectations associated with word meanings. The processing of a query is based on the contents of several knowledge sources including the dictionary entries (partial conceptualizations and their expectations), frames representing conceptual dependency primitives, scripts which contain stereotypical knowledge about planning tasks used to infer states enabling or resulting from actions, and two production system rule bases for the inference of implicit case fillers, and for determining the responsive database search. The goals of this approach, all of which are currently at least partially achieved, include utilizing similar representations for questions with similar meanings but widely varying surface structures, developing a powerful mechanism for the disambiguation of words with multiple meanings and the determination of pronoun referents, answering questions which require inferences to be understood, and interpreting ellipses and ungrammatical utterances. The KNOBS [Engelman, 1980] demonstration system is an experimental expert system providing consultant services to an Air Force tactical air mission planner. The KNOBS database consists of several nets of frames, implemented within an extension of FRL [Roberts, 1977], representing both individual and generic classes of targets, resources, and planned missions. The KNOBS system supports a planner by checking the consistency of plan components, enumerating or ranking possible choices for plan components, or automatically generating a complete plan. Because these activities are accomplished by means of rules and constraints expressible in English, KNOBS will hopefully be a relatively easy system to learn. For the same reasons, it is also being considered as an aid to train mission planners. The natural language subsystem of KNOBS plays several roles including those of database query, database update, command language, plan definition, and the addition or modification of production system rules representing domain knowledge. The most developed of these is database query, upon which this paper will focus. The balance of this paper will first outline the use of conceptual dependency and mention some prior related work and then describe the several knowledge sources and the parts they play in the parsing of the input query. Finally, it will describe the method of deriving the appropriate database search and output response as well as a script-based approach to interpreting commands. USE OF CONCEPTUAL DEPENDENCY. APE-II utilizes Conceptual Dependency theory [Schank, 1972] to represent the meaning of questions. Once the meaning of a question has been found, the question is answered by a rule based system whose tests are CD patterns and whose actions execute database queries. We feel it is important to represent the meaning in this manner for several reasons. First, the canonical meaning representation enables questions which have different surface expressions, but the same meaning, to be answered by the same mechanism.
This is not only of theoretical significance, but is also a practical matter as it requires less effort to produce a robust system. Second, because users do not always state explicitly what they mean, inferences may be required to explicate missing information. This inference process can also utilize the canonical meaning representation. Finally, finding the referent of a nominal which is modified by a relative clause is, in some cases, similar to question answering although the syntactic constructions used differ. As a result of this similarity, the question answering productions can also be used for determining the referents of a relative clause. The conversation with KNOBS (whose database is fictional) in Fig. 1 illustrates these points. The first question is represented in the same manner as "Does Ramstein have F-4G's?" and would be answered by the same rule. The second question, after resolving the pronominal reference, requires an inference to find the location from which the F-4G's will be leaving. This inference states that if the source of the object of a physical transfer is missing, then the source could be the initial location of the object. The third question can be thought of as two questions: "Which SCL's (Standard Configuration Load - a predefined weapons package) are carried by an F-4C?" and "Which of those contain ECM (Electronic Counter Measures - radar jamming equipment)?". The first part requires a script based inference: in order for an SCL to be carried by an aircraft, the aircraft must be capable of having the SCL as a part. After the first part is answered as a question, the second part is answered as a second question to discover which contain ECM. The system of representation used for nominals (or picture producers) differs from that normally present in a CD system. Typically, an object such as an F-4C would be represented as a picture producer with a TYPE case filled by VEHICLE, a SUBTYPE case filled by AIRCRAFT, and, perhaps, a MODEL case filled by F-4C. In KNOBS, the meaning representation produced by the parser is F-4C, the name of a frame. The set membership of this frame is indicated by links to other frames. F-4C is a kind of FIGHTER, which is a kind of AIRPLANE, which is an AIRCRAFT, which is a VEHICLE, which is a PICTURE PRODUCER. We feel that representing nominals in this manner allows a finer degree of discrimination than explicitly labeled cases to denote a conceptual hierarchy. Attributes of objects in the database (which are stored as value facets of slots in FRL) are represented as kinds of RELATIONs in the KNOBS system. For example, the representation of "Hahn's Latitude" is (LATITUDE ARGUMENT (HAHN)). Note, however, that the representation of "Hahn's aircraft" is (AIRCRAFT LOC (AT PLACE (HAHN))). We would like to distinguish the KNOBS natural language facility from such familiar natural language query systems as LADDER [Hendrix, 1978] and LUNAR [Woods, 1972] in both function and method. The functional model of the above systems is that of someone with a problem to solve and a database containing information useful in its solution which he can access via a natural language interface. KNOBS, by contrast, integrates the natural language capability with multi-faceted problem solving support including critiquing and generating tactical plans.
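A minimal sketch of the frame-style representation of nominals just described: set membership is carried by ISA links between frame names, and attributes are relation terms. The frame names come from the text; the lookup code is an illustrative assumption rather than the FRL implementation. (The comparison with other query systems continues below.)

```python
# Sketch of ISA-linked frames for nominals and relation terms for attributes.

ISA = {
    "F-4C": "FIGHTER",
    "FIGHTER": "AIRPLANE",
    "AIRPLANE": "AIRCRAFT",
    "AIRCRAFT": "VEHICLE",
    "VEHICLE": "PICTURE-PRODUCER",
}

def is_a(frame, ancestor):
    """Walk ISA links to test set membership in the conceptual hierarchy."""
    while frame is not None:
        if frame == ancestor:
            return True
        frame = ISA.get(frame)
    return False

print(is_a("F-4C", "AIRCRAFT"))         # True
print(is_a("F-4C", "AIR-CONDITIONER"))  # False

# An attribute of an object, in the style of (LATITUDE ARGUMENT (HAHN)):
hahns_latitude = ("LATITUDE", ("ARGUMENT", ("HAHN",)))
```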
Our approach differs in method from these previous systems in its bottom-up, dictionary driven parsing which results in a canonical representation of the meaning of the query, its ability to perform context dependent inferences with this representation during question answering, and the use of a declarative representation of the domain to assist parsing, question answering, plan updating, and inferencing. A system similar to APE-II in both its dictionary-driven approach to parsing and its direct attack on word sense disambiguation is the Word Expert Parser (WEP) [Small, 1980]. This parser associates a discrimination net with each word to guide the meaning selection process. Each word in a sentence is a pointer to a coroutine called a word expert which cooperates with neighboring words to build a meaning representation of the sentences in a bottom-up, i.e., data driven, fashion. At each node in the discrimination net a multiple-choice test is executed which can query the lexical properties or expectations (selectional restrictions [Katz, 1963]) of neighboring words, or proposed FOCUS, ACTIVITY, and DISCOURSE modules. The sense selection process of WEP requires that each word know all of the contexts in which its senses can occur. For example, to find the meaning of "pit", the pit expert can ask if a MINING-ACTIVITY, EATING-ACTION, CAR-RACING, or MUSIC-CONCERT-ACTION is active. Another similar system is APE (A Parsing Experiment), a parser used by the DSAM (Distributable Script Applying Mechanism) and ACE (Academic Counseling Expert) projects at the University of Connecticut [Cullingford, 1982]. APE is based on the CA parser [Birnbaum, 1981] with the addition of a word sense disambiguation algorithm. In CA, word definitions are represented as requests, a type of test-action pair. The test part of a request can check lexical and semantic features of neighboring words; the actions create or connect CD structures, and activate or deactivate other requests. The method available to select the appropriate meaning of a word in CA is to use the test part of separate requests to examine the meanings of other words and to build a meaning representation as a function of this local context. For example, if the object of "serve" is a food, the meaning is "bring to"; if the object is a ball, the meaning is "hit toward". This method works well for selecting a sense of a word which has expectations. However, some words have no expectations and the intended sense is the one that is expected. For example, the proper sense of "ball" in "John kicked the ball." and "John attended the ball." is the sense which the central action expects. The word definitions of APE are also represented as requests.
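As a rough illustration of the request (test-action pair) mechanism described above for CA-style parsers, the sketch below encodes the two senses of "serve" from the example; the data layout and helper functions are assumptions for the example, not the actual CA or APE code.

```python
# Toy sketch of requests: the test inspects a neighboring concept's semantic
# class, and the action selects the corresponding sense.

def make_request(test, action):
    return {"test": test, "action": action}

# Requests attached to "serve": object is a food -> "bring to"; a ball -> "hit toward".
serve_requests = [
    make_request(lambda obj: obj["class"] == "FOOD", lambda: "SERVE-BRING-TO"),
    make_request(lambda obj: obj["class"] == "BALL", lambda: "SERVE-HIT-TOWARD"),
]

def select_sense(requests, neighbor):
    """Run each request's test against the neighboring concept; fire the first match."""
    for r in requests:
        if r["test"](neighbor):
            return r["action"]()
    return None

print(select_sense(serve_requests, {"class": "FOOD"}))   # SERVE-BRING-TO
print(select_sense(serve_requests, {"class": "BALL"}))   # SERVE-HIT-TOWARD
```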
A special concept called a VEL is used to represent the set of possible meanings of a word. When searching for a concept which has certain semantic features, an expectation can select one or more senses from a VEL and discard those that are not appropriate. In addition, APE can use expectations from a contextual knowledge source such as a script applier to select a word sense. Each script is augmented with parser executable expectations called named requests. For example, at a certain point in understanding a restaurant story, leaving a tip for the waiter is expected. The parser is then given a named request which could help disambiguate the words "leave" and "tip", should they appear. A word definition in APE-II consists of the set of all of its senses. Each sense contains a concept, i.e., a partial CD structure which expresses the meaning of this sense, and a set of conceptual and lexical expectations. A conceptual expectation instructs the parser to look for a concept in a certain relative position which meets a selectional restriction. The expectation also contains a selectional preference, a more specific, preferred category for the expected concept (cf. [Wilks, 1972]). If such a concept is found, the expectation contains information on how it can be combined with the concept which initiated the expectation. A lexical expectation instructs the parser to look for a certain word and add a new, favored sense to it. This process is useful for predicting the function of a preposition [Riesbeck, 1976]. The definition of a pronoun utilizes a context and focus mechanism to find the set of possible referents which agree with it in number and gender. The pronoun is then treated like a word with multiple senses. The definitions of the words "fly", "eat" and "A/C" are shown in Fig. 2. The definition of "A/C" states that it means AIRCRAFT or AIR-CONDITIONER. APE-II uses selectional restrictions to choose the proper sense of "A/C" in the question "What A/C can fly from Hahn?". On the other hand, in the sentence "Send 4 A/C to BE70701.", APE-II utilizes the facts that the OCA script is active, and that sending aircraft to a target is a scene of that script, to determine that "A/C" means AIRCRAFT. In the question "What is an A/C?", APE-II uses a weaker argument to resolve the potential ambiguity. It utilizes the fact that AIRCRAFT is an object that can perform a role in the OCA script, while an AIR-CONDITIONER cannot. The definition of "fly" states that it means FLY, which is a kind of physical transfer. The expectations associated with fly state that the actor of the sentence (i.e., a concept which precedes the action in a declarative sentence, follows "by" in a passive sentence, or appears in various places in questions, etc.) is expected to be an AIRCRAFT, in which case it is the OBJECT of FLY, or is expected to be a BIRD, in which case it is both the ACTOR and the OBJECT of the physical transfer. This is the expectation which can select the intended sense of "A/C". If the word "to" appears, it might serve the function of indicating the filler of the TO case of FLY. The word "from" is given a similar definition, which would fill the FROM case with the object of the preposition, which should be a PICTURE-PRODUCER but is preferred to be a LOCATION. The definition of "eat" contains an expectation with a selectional preference which indicates that the object is preferred to be food. This preference serves another purpose also. The object will be converted to a food if possible.
For example, if the object were "chicken" then this conversion would assert that it is a dead and cooked chicken. We will first discuss the parsing process as if sentences could be parsed in isolation and then explain how it is augmented to account for context. The simplified parsing process consists of adding the senses of each word to an active memory, considering the expectations, and removing concepts (senses) which are not connected to other concepts. Word sense disambiguation and the resolution of pronominal references are achieved by several mechanisms. Selectional restrictions can be helpful to resolve ambiguities. For example, many actions require an animate actor. If there are several choices for the actor, the inanimate ones will be weeded out. Conversely, if there are several choices for the main action, and the actor has been established as animate, then those actions which require an inanimate actor will be discarded. Selectional preferences are used in addition to selectional restrictions. For example, if "eat" has an object which is a pronoun whose possible referents are a food and a coin, the food will be preferred and the coin discarded as a possible referent. A conflict resolution mechanism is invoked if more than one concept satisfies the restrictions and preferences. This consists of using "conceptual constraints" to determine if the CD structure which would be built is plausible. These constraints are predicates associated with CD primitives. For example, the locational specifier INSIDE has a constraint which states that the contents must be smaller than the container. The disambiguation process can make use of the knowledge structures which represent stereotypical domain information. The conflict resolution algorithm also determines if the CD structure which would be built refers to a scene in an active script and prefers to build this type of conceptualization. At the end of the parse, if there is an ambiguous nominal, the possibilities are matched against the roles of the active scripts. Nominals which can be a script role are preferred. A planned extension to the parsing algorithm consists of augmenting the definition of a word sense with information about whether it is an uncommonly used sense, and the contexts in which it could be used (see [Charniak, 1981]). Only some senses will be added to the active memory and if none of those concepts can be connected, other senses will be added. A similar mechanism can be used for potential pronoun referents, organizing concepts according to implicit or explicit focus in addition to their location in active or open focus spaces (see [Grosz, 1977]). Another extension to APE-II will be the incorporation of a mechanism similar to the named requests of APE. However, because the expectations of APE-II are in a declarative format, it is hoped that these requests can be generated from the causally linked scenes of the script. After the meaning of a question has been represented, the question is answered by means of pattern-invoked rules. Typically, the pattern matching process binds variables to the major nominals in a question conceptualization.
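The sketch below illustrates, under assumed data structures, how a VEL of senses and a verb's conceptual expectations (selectional restrictions) could interact to disambiguate "A/C" in "What A/C can fly from Hahn?". None of the classes or helpers are the actual APE-II code; the toy ISA table and dictionary are assumptions. The question answering machinery that uses the resulting conceptualization is described next.

```python
# Sketch of dictionary-driven sense selection via selectional restrictions.
from dataclasses import dataclass

ISA = {"AIRCRAFT": "VEHICLE", "AIR-CONDITIONER": "APPLIANCE"}  # toy hierarchy

@dataclass
class Sense:
    concept: str                  # partial CD structure, abbreviated to a name

@dataclass
class Expectation:
    position: str                 # e.g. "actor"
    restriction: str              # semantic class the filler must satisfy

DICTIONARY = {
    "A/C": [Sense("AIRCRAFT"), Sense("AIR-CONDITIONER")],      # a VEL of senses
    "fly": [Sense("FLY")],
}
EXPECTATIONS = {"fly": [Expectation("actor", "AIRCRAFT"),
                        Expectation("actor", "BIRD")]}

def satisfies(concept, cls):
    """Walk the toy ISA links to test whether a concept meets a restriction."""
    while concept:
        if concept == cls:
            return True
        concept = ISA.get(concept)
    return False

def disambiguate(word, governing_verb):
    """Keep only the senses of `word` that meet some expectation of the verb."""
    senses = DICTIONARY[word]
    expected = EXPECTATIONS.get(governing_verb, [])
    kept = [s for s in senses
            if any(satisfies(s.concept, e.restriction) for e in expected)]
    return kept or senses         # if nothing matches, fall back to all senses

# "What A/C can fly from Hahn?"  ->  the AIRCRAFT sense survives
print([s.concept for s in disambiguate("A/C", "fly")])   # ['AIRCRAFT']
```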
The referents of these nominals are used in executing a database query which finds the answer to the user's question. Although the question conceptualization and the answer could be used to generate a natural language response [Goldman, 1975], the current response facility merely substitutes the answer and referents in a canned response procedure associated with each question answering rule. The question answering rules are organized according to the context in which they are appropriate, i.e., the conversational script [Lehnert, 1978], and according to the primitive of the conceptualization and the "path to the focus" of the question. The path to the focus of a question is considered to be the path of conceptual cases which leads to the subconcept in question. A question answering production is displayed in Fig. 3. It is a default pattern designed to answer questions about which objects are at a location. This pattern is used to answer the question "What fighters do the airbases in West Germany have?". In this example, the pattern variable &LOC is bound to the meaning representation of "the airbases in West Germany" and &OBJECT is bound to the meaning representation of "fighters". The action is then executed and the referent of &OBJECT is found to be (FIGHTER) and the referent of &LOC is found to be (HAHN SEMBACH BITBURG). The fighters at each of these locations are found and the variable ANSWER is bound to the value of MAPPAIR: ((HAHN . (F-4C F-15)) (SEMBACH . NIL) (BITBURG . (F-4C F-15))). The response facet of the question answering production reformats the results of the action to merge locations with the same set of objects. The answer "There are none at Sembach. Hahn and Bitburg have F-4Cs and F-15s." is printed on successive iterations of PMAPC. The production in Fig. 3 is used to answer most questions about objects at a location. It invokes a general function which finds the subset of the parts of a location which belong to a certain class. The OCA (offensive counter air) script used by the KNOBS system contains a more specific pattern for answering questions about the defenses of a location. This production is used to answer the question "What SAMs are at BE70701?". The action of this production executes a procedure which finds the subset of the surface to air missiles whose range is greater than the distance to the location. In addition to executing a database query, the action of a rule can recursively invoke other question answering rules. For example, to answer the question "How many airbases have F-4C's?", a general rule converts the conceptualization of the question to that of "Which airbases have F-4C's?" and counts the result of answering the latter. The question answering rules can also be used to find the referent of complex nominals such as "the airbases which have F-4C's". The path to the focus of the "question" is indicated by the conceptual case of the relative pronoun. When important roles are not filled in a concept, "conceptual completion" inferences are required to infer the fillers of conceptual cases.
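Below is an illustrative sketch of a pattern-invoked question answering production of the kind just described: a pattern that binds &LOC and &OBJECT, an action that runs a lookup, and a response that formats the answer. The tiny database and matcher are assumptions for the example, not the KNOBS rule base.

```python
# Sketch of a default "which objects are at a location" production.

DB = {"HAHN": ["F-4C", "F-15"], "SEMBACH": [], "BITBURG": ["F-4C", "F-15"]}
FIGHTERS = {"F-4C", "F-15", "F-4G"}

def pattern(conceptualization):
    """Match 'objects at a location' questions; return variable bindings or None."""
    if conceptualization.get("primitive") != "AT":
        return None
    return {"&LOC": conceptualization["place"], "&OBJECT": conceptualization["object"]}

def action(bindings):
    # Find the subset of the parts of each location that belong to the asked-for class.
    return {loc: [a for a in DB[loc] if a in FIGHTERS] for loc in bindings["&LOC"]}

def response(answer):
    for loc, objs in answer.items():
        print(f"{loc} has {', '.join(objs)}." if objs else f"There are none at {loc}.")

# "What fighters do the airbases in West Germany have?"
question = {"primitive": "AT", "place": ["HAHN", "SEMBACH", "BITBURG"], "object": "FIGHTER"}
bindings = pattern(question)
if bindings:
    response(action(bindings))
```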
Our conceptual completion inferences are expressed as rules represented and organized in a manner analogous to question answering rules. The path to the focus of a conceptual completion inference is the conceptual case which it is intended to explicate. Conceptual completion inferences are run only when necessary, i.e., when required by the pattern matcher to enable a question answering pattern (or even another inference pattern) to match successfully. An example conceptual completion inference is illustrated in Fig. 4. It is designed to infer the missing source of a physical transfer. The pattern binds the variable &OBJECT to the filler of the OBJECT role and the action executes a function which looks at the LOCATION case of &OBJECT or checks the database for the known location of the referent of &OBJECT. This inference would not be used in processing the question "Which aircraft at Ramstein could reach the target from Hahn?" because the source has been explicitly stated. It would be used, on the other hand, in processing the question "Which aircraft at Ramstein can reach the target?". Its effect would be to fill the FROM slot of the question conceptualization with RAMSTEIN. If a question answering production cannot be found to respond to a question, and the question refers to a scene in an active script, causal inferences are used to find an answerable question which can be constructed as a state or action implied by the original question. These inferences are represented by causal links [Cullingford, 1978] which connect the states and actions of a stereotypical situation. The causal links used for this type of inference are RESULT (actions can result in state changes), ENABLE (states can enable actions), and RESULT-ENABLE (an action results in a state which enables an action). This last inference is so common that it is given a special link. In some cases, the intermediate state is unimportant or unknown. In addition to causal links, temporal links are also represented to reason about the sequencing of actions. The causal inference process consists of locating a script pattern of an active script which represents the scene of the script referred to by a question. The pattern matching algorithm assures that the constants in the pattern are a super-class of the constants in the conceptual hierarchy of FRL frames. The variables in script patterns are the script roles which represent the common objects and actors of the script. The binding of script roles to subconcepts of a question conceptualization is subject to the recursive matching of patterns which indicate the common features of the roles. (This will be explained in more detail in the section on interactive script instantiation.) After the scene referenced by the user question is identified, a new question concept is constructed by substituting role bindings into patterns representing states or actions linked to the identified scene. The script pattern AC-FLY-TO-TARGET, for example, represents the aircraft flying to the target; it results in the aircraft being over the target, which enables the aircraft to attack the target. The script pattern AC-HIT-TARGET represents the propelling of a weapon toward the target. It results in the destruction of the target, and is followed by the aircraft flying back to the airbase. The knowledge represented by these script patterns is needed to answer the question "What aircraft at Hahn can strike BE70701?".
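The sketch below combines, under assumed data, the two inference steps described above: a causal RESULT-ENABLE link rewrites a question about an unanswerable scene into one about an answerable scene, and a conceptual-completion step fills the missing FROM case from the object's known location. The scene names mirror the text; the tables and helpers are illustrative assumptions, not KNOBS code. The paper's worked example below traces these same two steps through Figs. 6a-6d.

```python
# Sketch of causal (RESULT-ENABLE) rewriting plus conceptual completion.

RESULT_ENABLE = {"AC-FLY-TO-TARGET": "AC-HIT-TARGET"}  # earlier scene -> scene it enables
ANSWERABLE_SCENES = {"AC-FLY-TO-TARGET"}               # scenes some QA pattern handles
KNOWN_LOCATIONS = {"F-15": "HAHN"}                      # hypothetical database lookup

def causal_rewrite(scene):
    """If no QA pattern matches `scene`, try a scene that RESULT-ENABLEs it."""
    if scene in ANSWERABLE_SCENES:
        return scene
    for earlier, later in RESULT_ENABLE.items():
        if later == scene and earlier in ANSWERABLE_SCENES:
            return earlier
    return None

def complete_from_case(concept):
    """Conceptual completion: infer a missing FROM filler from the object's location."""
    if "from" not in concept:
        loc = KNOWN_LOCATIONS.get(concept["object"])
        if loc:
            concept = dict(concept, **{"from": loc})
    return concept

# "What aircraft at Hahn can strike BE70701?" is identified as AC-HIT-TARGET;
# the inference rewrites it as an AC-FLY-TO-TARGET (reachability) question.
scene = causal_rewrite("AC-HIT-TARGET")                 # -> AC-FLY-TO-TARGET
question = {"scene": scene, "object": "F-15", "to": "BE70701"}
print(complete_from_case(question))                     # ... 'from': 'HAHN' ...
```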
The answer produced by KNOBS, "F-15s can reach BE70701 from Hahn.", requires a causal inference and a concept completion inference. The first step in producing this answer is to represent the meaning of the sentence. The conceptualization produced by APE-II is shown in Fig. 6a. A search for a question answering pattern to answer this fails, so causal inferences are tried. The question concept is identified to be the AC-HIT-TARGET scene of the OCA script, and the scene which RESULT-ENABLEs it, AC-FLY-TO-TARGET, is instantiated. This new question conceptualization is displayed in Fig. 6b. A question answering pattern whose focus is (OBJECT IS-A) is found which could match the inferred question (Fig. 6c). To enable this pattern to match the inferred question, the FROM case must be inferred. This is accomplished by a concept completion inference which produces the complete conceptualization shown in Fig. 6d. Finally, the action and response of the question answering production are executed to calculate and print an answer. The script patterns which describe the relationships among the scenes of a situation are also used by the KNOBS system to guide a conversation about that domain. The conversation with KNOBS in Fig. 7 illustrates the entering of plan components by interactively instantiating script patterns. The first user sentence instantiates two script patterns (the flying of aircraft, and the striking of a target) and binds the script roles: TARGET to BE70501, WING to 109TFW, AIRCRAFT-NUMBER to 4, and TIME-OVER-TARGET to 0900. KNOBS asks the user to select the AIRCRAFT. Because the user replied with a question whose answer is an aircraft, KNOBS asks if the user would like to use that aircraft as a component of the developing plan. This is accomplished by a rule that is activated when KNOBS asks the user to specify a plan component. The interpretation of the user's negative answer is handled by a rule activated when KNOBS asks a yes-no question. KNOBS checks the consistency of the user's answer and explains a constraint which has failed. Then, the user corrects this problem, and KNOBS processes the extra information supplied by matching the meaning of the user's input to a script pattern.

Send 4 aircraft from the 109TFW to strike BE70501 at 0900.
What aircraft do you want to use?
What aircraft are in the 109TFW?
The 109TFW has F-4Cs. Would you like to use F-4Cs for the aircraft?
No, F-4Gs.
The 109TFW does not contain F-4Gs.
Fly the F-4Gs out of the 126TFW at Ramstein.

A script role can be bound by matching against patterns associated with other script roles in addition to matching against script patterns. Fig. 8 shows a role pattern associated with the script role AIRCRAFT.
This pattern serves two purposes: to prevent bindings to the script role which would not make sense (i.e., the object which plays the AIRCRAFT role must be an aircraft) and to recursively bind other script roles to attached concepts. In this example, the AIRBASE or the WING could be attached to the AIRCRAFT concept, e.g., "F-4Cs from Hahn" or "F-4Cs in the 126TFW". The interactive script interpreter is an alternative to the menu system provided by KNOBS for the entering of important components of a plan to be checked for consistency. KNOBS also provides a means of automatically finishing the creation of a consistent plan. This can allow an experienced mission planner to enter a plan by typing one or two sentences and hitting a key which tells KNOBS to choose the unspecified components. To demonstrate their domain independence, the KNOBS system and APE-II have been provided with knowledge bases to plan and answer questions about naval "show of flag" missions. This version of KNOBS also uses FRL as a database language. A large portion of the question answering capability was directly applicable for a number of reasons. First of all, dictionary entries for frames are constructed automatically when they appear in a user query. The definitions of the attributes (slots) of a frame which are represented as RELATIONs are also constructed when needed. The definitions of many common words such as "be", "have", "a", "of", etc., would be useful in understanding questions in any domain. The question answering productions and concept completion inferences are separated into default and domain specific categories. Many of the simple but common queries are handled by default patterns. For example, "Which airbases have fighters?" and "What ports have cruisers?" are answered by the same default pattern. Currently, the Navy version of KNOBS has 3 domain specific question answering patterns, compared to 22 in the Air Force version. (There are 46 default patterns.) The most important knowledge structure missing in the Navy domain is the scripts which are needed to perform causal inferences and dialog directed planning. Therefore, the system can answer the question "What weapons does the Nimitz have?", but can't answer "What weapons does the Nimitz carry?". We have argued that the processing of natural language database queries should be driven by the meaning of the input, as determined primarily by the meanings of the constituent words. The mechanisms provided for word sense selection and for the inference of missing meaning elements utilize a variety of knowledge sources. It is believed that this approach will prove more general and extensible than those based chiefly on the surface structure of the natural language query.
null
null
Main paper: : conceptual dependency conceptualization as a representation of the meaning of user input. A conceptualization pattern matching production system then determines and executes a procedure for extracting the desired information from the database. In contrast to syntax driven Q-A systems, e.g., those based on ATN parsers, APE-II is driven bottom-up by expectations associated with word meanings. The processing of a query is based on the contents of several knowledge sources including the dictionary entries (partial conceptualizations and their expectations), frames representing conceptual dependency primitives, scripts which contain stereotypical knowledge about planning tasks used to infer states enabling or resulting from actions, and two production system rule bases for the inference of implicit case fillers and for determining the responsive database search. The goals of this approach, all of which are currently at least partially achieved, include utilizing similar representations for questions with similar meanings but widely varying surface structures, developing a powerful mechanism for the disambiguation of words with multiple meanings and the determination of pronoun referents, answering questions which require inferences to be understood, and interpreting ellipses and ungrammatical utterances. The KNOBS [Engelman, 1980] demonstration system is an experimental expert system providing consultant services to an Air Force tactical air mission planner. The KNOBS database consists of several nets of frames, implemented within an extension of FRL [Roberts, 1977], representing both individual and generic classes of targets, resources, and planned missions. The KNOBS system supports a planner by checking the consistency of plan components, enumerating or ranking possible choices for plan components, or automatically generating a complete plan. Because these activities are accomplished by means of rules and constraints expressible in English, KNOBS will hopefully be a relatively easy system to learn. For the same reasons, it is also being considered as an aid to train mission planners. The natural language subsystem of KNOBS plays several roles including those of database query, database update, command language, plan definition, and the addition or modification of production system rules representing domain knowledge. The most developed of these is database query, upon which this paper will focus. The balance of this paper will first outline the use of conceptual dependency and mention some prior related work, and then describe the several knowledge sources and the parts they play in the parsing of the input query. Finally, it will describe the method of deriving the appropriate database search and output response as well as a script-based approach to interpreting commands. USE OF CONCEPTUAL DEPENDENCY APE-II utilizes Conceptual Dependency theory [Schank, 1972] to represent the meaning of questions. Once the meaning of a question has been found, the question is answered by a rule based system whose tests are CD patterns and whose actions execute database queries. We feel it is important to represent the meaning in this manner for several reasons. First, the canonical meaning representation enables questions which have different surface expressions, but the same meaning, to be answered by the same mechanism.
This is not only of theoretical significance, but is also a practical matter as it requires less effort to produce a robust system. Second, since questions do not always state explicitly what they mean, inferences may be required to explicate missing information. This inference process can also utilize the canonical meaning representation. Finally, finding the referent of a nominal which is modified by a relative clause is, in some cases, similar to question answering although the syntactic constructions used differ. As a result of this similarity, the question answering productions can also be used for determining the referents of a relative clause. The conversation with KNOBS (whose database is fictional) in Fig. 1 illustrates these points. The first question is represented in the same manner as "Does Ramstein have F-4G's?" and would be answered by the same rule. The second question, after resolving the pronominal reference, requires an inference to find the location from which the F-4G's will be leaving. This inference states that if the source of the object of a physical transfer is missing, then the source could be the initial location of the object. The third question can be thought of as two questions: "Which SCLs (Standard Configuration Load, a predefined weapons package) are carried by an F-4C?" and "Which of those contain ECM (Electronic Counter Measures, radar jamming equipment)?". The first part requires a script based inference: in order for an SCL to be carried by an aircraft, the aircraft must be capable of having the SCL as a part. After the first part is answered as a question, the second part is answered as a second question to discover which contain ECM. The system of representation used for nominals (or picture producers) differs from that normally present in a CD system. Typically, an object such as an F-4C would be represented as a picture producer with a TYPE case filled by VEHICLE, a SUBTYPE case filled by aircraft, and, perhaps, a MODEL case filled by F-4C. In KNOBS, the meaning representation produced by the parser is F-4C, the name of a frame. The set membership of this frame is indicated by links to other frames. F-4C is a kind of FIGHTER, which is a kind of AIRPLANE, which is an AIRCRAFT, which is a VEHICLE, which is a PICTURE PRODUCER. We feel that representing nominals in this manner allows a finer degree of discrimination than explicitly labeled cases to denote a conceptual hierarchy. The attributes of objects in the database (which are stored as value facets of slots in FRL) are represented as kinds of RELATIONs in the KNOBS system. For example, the representation of "Hahn's latitude" is (LATITUDE ARGUMENT (HAHN)). Note, however, that the representation of "Hahn's aircraft" is (AIRCRAFT LOC (AT PLACE (HAHN))).
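The two representational points above, set membership expressed through frame links rather than TYPE/SUBTYPE cases, and attributes expressed as relations applied to an argument, can be rendered in a toy form. The following is not FRL; the table contents and the helper function are assumptions made only for illustration.

```python
# A toy Python rendering of the frame-based treatment of nominals and
# attributes described above.  This is not FRL; the links and values below
# are illustrative assumptions.

ISA = {"F-4C": "FIGHTER", "FIGHTER": "AIRPLANE", "AIRPLANE": "AIRCRAFT",
       "AIRCRAFT": "VEHICLE", "VEHICLE": "PICTURE-PRODUCER"}

def is_a(frame, cls):
    """Follow the frame links to test set membership."""
    while frame is not None:
        if frame == cls:
            return True
        frame = ISA.get(frame)
    return False

print(is_a("F-4C", "AIRCRAFT"))     # True, inherited through the frame net

# "Hahn's latitude" in the spirit of (LATITUDE ARGUMENT (HAHN)):
hahns_latitude = ("LATITUDE", "ARGUMENT", ("HAHN",))
# "Hahn's aircraft" in the spirit of (AIRCRAFT LOC (AT PLACE (HAHN))):
hahns_aircraft = ("AIRCRAFT", "LOC", ("AT", "PLACE", ("HAHN",)))
```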
We would like to distinguish the KNOBS natural language facility from such familiar natural language query systems as LADDER [Hendrix, 1978] and LUNAR [Woods, 1972] in both function and method. The functional model of the above systems is that of someone with a problem to solve and a database containing information useful in its solution which he can access via a natural language interface. KNOBS, by contrast, integrates the natural language capability with multi-faceted problem solving support including critiquing and generating tactical plans. Our approach differs in method from these previous systems in its bottom-up, dictionary driven parsing which results in a canonical representation of the meaning of the query, its ability to perform context dependent inferences with this representation during question answering, and the use of a declarative representation of the domain to assist parsing, question answering, plan updating, and inferencing. A system similar to APE-II in both its dictionary-driven approach to parsing and its direct attack on word sense disambiguation is the Word Expert Parser (WEP) [Small, 1980]. This parser associates a discrimination net with each word to guide the meaning selection process. Each word in a sentence is a pointer to a coroutine called a word expert which cooperates with neighboring words to build a meaning representation of the sentences in a bottom-up, i.e., data driven, fashion. At each node in the discrimination net a multiple-choice test is executed which can query the lexical properties or expectations (selectional restrictions [Katz, 1963]) of neighboring words, or proposed FOCUS, ACTIVITY, and DISCOURSE modules. The sense selection process of WEP requires that each word know all of the contexts in which its senses can occur. For example, to find the meaning of "pit", the pit expert can ask if a MINING-ACTIVITY, EATING-ACTION, CAR-RACING, or MUSIC-CONCERT-ACTION is active. Another related system is APE (A Parsing Experiment), a parser used by the DSAM (Distributable Script Applying Mechanism) and ACE (Academic Counseling Expert) projects at the University of Connecticut [Cullingford, 1982]. APE is based on the CA parser [Birnbaum, 1981] with the addition of a word sense disambiguation algorithm. In CA, word definitions are represented as requests, a type of test-action pair. The test part of a request can check lexical and semantic features of neighboring words; the actions create or connect CD structures, and activate or deactivate other requests. The method available to select the appropriate meaning of a word in CA is to use the test part of separate requests to examine the meanings of other words and to build a meaning representation as a function of this local context. For example, if the object of "serve" is a food, the meaning is "bring to"; if the object is a ball, the meaning is "hit toward". This method works well for selecting a sense of a word which has expectations. However, some words have no expectations and the intended sense is the one that is expected. For example, the proper sense of "ball" in "John kicked the ball." and "John attended the ball." is the sense which the central action expects. The word definitions of APE are also represented as requests.
A special concept called a VEL is used to represent the set of possible meanings of a word. When searching for a concept which has certain semantic features, an expectation can select one or more senses from a VEL and discard those that are not appropriate. In addition, APE can use expectations from a contextual knowledge source such as a script applier to select a word sense. Each script is augmented with parser executable expectations called named requests. For example, at a certain point in understanding a restaurant story, leaving a tip for the waiter is expected. The parser is then given a named request which could help disambiguate the words "leave" and "tip", should they appear. A word definition in APE-II consists of the set of all of its senses. Each sense contains a concept, i.e., a partial CD structure which expresses the meaning of this sense, and a set of conceptual and lexical expectations. A conceptual expectation instructs the parser to look for a concept in a certain relative position which meets a selectional restriction. The expectation also contains a selectional preference, a more specific, preferred category for the expected concept (cf. [Wilks, 1972]). If such a concept is found, the expectation contains information on how it can be combined with the concept which initiated the expectation. A lexical expectation instructs the parser to look for a certain word and add a new, favored sense to it. This process is useful for predicting the function of a preposition [Riesbeck, 1976]. The definition of a pronoun utilizes a context and focus mechanism to find the set of possible referents which agree with it in number and gender. The pronoun is then treated like a word with multiple senses. The definitions of the words "fly", "eat" and "A/C" are shown in Fig. 2. The definition of "A/C" states that it means AIRCRAFT or AIR-CONDITIONER. APE-II uses selectional restrictions to choose the proper sense of "A/C" in the question "What A/C can fly from Hahn?". On the other hand, in the sentence "Send 4 A/C to BE70701.", APE-II utilizes the facts that the OCA script is active, and that sending aircraft to a target is a scene of that script, to determine that "A/C" means AIRCRAFT. In the question "What is an A/C?", APE-II uses a weaker argument to resolve the potential ambiguity. It utilizes the fact that AIRCRAFT is an object that can perform a role in the OCA script, while an AIR-CONDITIONER cannot. The definition of "fly" states that it means FLY, which is a kind of physical transfer. The expectations associated with fly state that the actor of the sentence (i.e., a concept which precedes the action in a declarative sentence, follows "by" in a passive sentence, or appears in various places in questions, etc.) is expected to be an AIRCRAFT, in which case it is the OBJECT of FLY, or is expected to be a BIRD, in which case it is both the ACTOR and the OBJECT of the physical transfer. This is the expectation which can select the intended sense of "A/C". If the word "to" appears, it might serve the function of indicating the filler of the TO case of FLY. The word "from" is given a similar definition, which would fill the FROM case with the object of the preposition, which should be a PICTURE-PRODUCER but is preferred to be a LOCATION.
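A condensed sketch of dictionary entries of the sort shown in Fig. 2 may help; it is written in Python rather than the system's own notation, and the field names, the toy hierarchy, and the selection routine are assumptions. The point is only that a sense is a concept plus expectations, and that a conceptual expectation's selectional restriction can pick the AIRCRAFT sense of "A/C" in "What A/C can fly from Hahn?".

```python
# A condensed, assumed rendering of dictionary entries like those in Fig. 2;
# not the system's own notation.

ISA = {"AIRCRAFT": "VEHICLE", "AIR-CONDITIONER": "APPLIANCE", "BIRD": "ANIMAL"}

LEXICON = {
    "A/C": [{"concept": "AIRCRAFT"}, {"concept": "AIR-CONDITIONER"}],
    "fly": [{"concept": "FLY",                 # FLY: a kind of physical transfer
             "expectations": [                 # actor restrictions -> CD role
                 {"restriction": "AIRCRAFT", "role": "OBJECT"},
                 {"restriction": "BIRD", "role": "ACTOR+OBJECT"}]}],
}

def is_a(concept, cls):
    while concept is not None:
        if concept == cls:
            return True
        concept = ISA.get(concept)
    return False

def select_actor_sense(verb, actor_word):
    """Pick the actor's sense using the verb's conceptual expectations,
    as in 'What A/C can fly from Hahn?'."""
    for expectation in LEXICON[verb][0]["expectations"]:
        for sense in LEXICON[actor_word]:
            if is_a(sense["concept"], expectation["restriction"]):
                return sense["concept"], expectation["role"]

print(select_actor_sense("fly", "A/C"))    # -> ('AIRCRAFT', 'OBJECT')
```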
The definition of "eat" contains an expectation with a selectional preference which indicates that the object is preferred to be food. This preference serves another purpose also. The object will be converted to a food if possible. For example, if the object were "chicken" then this conversion would assert that it is a dead and cooked chicken. We will first discuss the parsing process as if sentences could be parsed in isolation and then explain how it is augmented to account for context. The simplified parsing process consists of adding the senses of each word to an active memory, considering the expectations, and removing concepts (senses) which are not connected to other concepts. Word sense disambiguation and the resolution of pronominal references are achieved by several mechanisms. Selectional restrictions can be helpful to resolve ambiguities. For example, many actions require an animate actor. If there are several choices for the actor, the inanimate ones will be weeded out. Conversely, if there are several choices for the main action, and the actor has been established as animate, then those actions which require an inanimate actor will be discarded. Selectional preferences are used in addition to selectional restrictions. For example, if "eat" has an object which is a pronoun whose possible referents are a food and a coin, the food will be preferred and the coin discarded as a possible referent. A conflict resolution mechanism is invoked if more than one concept satisfies the restrictions and preferences. This consists of using "conceptual constraints" to determine if the CD structure which would be built is plausible. These constraints are predicates associated with CD primitives. For example, the locational specifier INSIDE has a constraint which states that the contents must be smaller than the container. The disambiguation process can make use of the knowledge structures which represent stereotypical domain information. The conflict resolution algorithm also determines if the CD structure which would be built refers to a scene in an active script and prefers to build this type of conceptualization. At the end of the parse, if there is an ambiguous nominal, the possibilities are matched against the roles of the active scripts. Nominals which can be a script role are preferred. A planned extension to the parsing algorithm consists of augmenting the definition of a word sense with information about whether it is an uncommonly used sense, and the contexts in which it could be used (see [Charniak, 1981]). Only some senses will be added to the active memory and if none of those concepts can be connected, other senses will be added. A similar mechanism can be used for potential pronoun referents, organizing concepts according to implicit or explicit focus in addition to their location in active or open focus spaces (see [Grosz, 1977]). Another extension to APE-II will be the incorporation of a mechanism similar to the named requests of APE. However, because the expectations of APE-II are in a declarative format, it is hoped that these requests can be generated from the causally linked scenes of the script.
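The weeding by selectional preferences described above can be illustrated with the "eat" example; the preference table and the candidate referents below are assumptions used only to show the mechanism.

```python
# A toy illustration of preference-based weeding: the action's selectional
# preference discards implausible senses or referents.  The tables and the
# candidates are illustrative assumptions.

PREFERS_OBJECT = {"EAT": "FOOD"}                 # "eat" prefers a food object
CLASS_OF = {"chicken": "FOOD", "coin": "COIN"}

def resolve_object(action, candidates):
    preferred = [c for c in candidates
                 if CLASS_OF[c] == PREFERS_OBJECT.get(action)]
    return preferred or candidates               # fall back if nothing matches

# "eat" with a pronoun whose possible referents are a food and a coin:
print(resolve_object("EAT", ["chicken", "coin"]))   # -> ['chicken']
```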
After the meaning of a question has been represented, the question is answered by means of pattern-invoked rules. Typically, the pattern matching process binds variables to the major nominals in a question conceptualization. The referents of these nominals are used in executing a database query which finds the answer to the user's question. Although the question conceptualization and the answer could be used to generate a natural language response [Goldman, 1975], the current response facility merely substitutes the answer and referents in a canned response procedure associated with each question answering rule. The question answering rules are organized according to the context in which they are appropriate, i.e., the conversational script [Lehnert, 1978], and according to the primitive of the conceptualization and the "path to the focus" of the question. The path to the focus of a question is considered to be the path of conceptual cases which leads to the subconcept in question. A question answering production is displayed in Fig. 3. It is a default pattern designed to answer questions about which objects are at a location. This pattern is used to answer the question "What fighters do the airbases in West Germany have?". In this example, the pattern variable &LOC is bound to the meaning representation of "the airbases in West Germany" and &OBJECT is bound to the meaning representation of "fighters". The action is then executed, and the referent of &OBJECT is found to be (FIGHTER) and the referent of &LOC is found to be (HAHN SEMBACH BITBURG). The fighters at each of these locations are found and the variable ANSWER is bound to the value of MAPPAIR: ((HAHN . (F-4C F-15)) (SEMBACH . NIL) (BITBURG . (F-4C F-15))). The response facet of the question answering production reformats the results of the action to merge locations with the same set of objects. The answer "There are none at Sembach. Hahn and Bitburg have F-4Cs and F-15s." is printed on successive iterations of PMAPC. The production in Fig. 3 is used to answer most questions about objects at a location. It invokes a general function which finds the subset of the parts of a location which belong to a certain class. The OCA (offensive counter air) script used by the KNOBS system contains a more specific pattern for answering questions about the defenses of a location. This production is used to answer the question "What SAMs are at BE70701?". The action of this production executes a procedure which finds the subset of the surface to air missiles whose range is greater than the distance to the location. In addition to executing a database query, the action of a rule can recursively invoke other question answering rules. For example, to answer the question "How many airbases have F-4Cs?", a general rule converts the conceptualization of the question to that of "Which airbases have F-4Cs?" and counts the result of answering the latter. The question answering rules can also be used to find the referent of complex nominals such as "the airbases which have F-4Cs". The path to the focus of the "question" is indicated by the conceptual case of the relative pronoun. When important roles are not filled in a concept, "conceptual completion" inferences are required to infer the fillers of conceptual cases.

Our conceptual completion inferences are expressed as rules represented and organized in a manner analogous to question answering rules. The path to the focus of a conceptual completion inference is the conceptual case which it is intended to explicate. Conceptual completion inferences are run only when necessary, i.e., when required by the pattern matcher to enable a question answering pattern (or even another inference pattern) to match successfully. An example conceptual completion inference is illustrated in Fig. 4. It is designed to infer the missing source of a physical transfer. The pattern binds the variable &OBJECT to the filler of the OBJECT role, and the action executes a function which looks at the LOCATION case of &OBJECT or checks the database for the known location of the referent of &OBJECT. This inference would not be used in processing the question "Which aircraft at Ramstein could reach the target from Hahn?" because the source has been explicitly stated. It would be used, on the other hand, in processing the question "Which aircraft at Ramstein can reach the target?". Its effect would be to fill the FROM slot of the question conceptualization with RAMSTEIN. If a question answering production cannot be found to respond to a question, and the question refers to a scene in an active script, causal inferences are used to find an answerable question which can be constructed as a state or action implied by the original question. These inferences are represented by causal links [Cullingford, 1978] which connect the states and actions of a stereotypical situation. The causal links used for this type of inference are RESULT (actions can result in state changes), ENABLE (states can enable actions), and RESULT-ENABLE (an action results in a state which enables an action). This last inference is so common that it is given a special link. In some cases, the intermediate state is unimportant or unknown. In addition to causal links, temporal links are also represented to reason about the sequencing of actions. The causal inference process consists of locating a script pattern of an active script which represents the scene of the script referred to by a question. The pattern matching algorithm assures that the constants in the pattern are a super-class of the constants in the conceptual hierarchy of FRL frames. The variables in script patterns are the script roles which represent the common objects and actors of the script. The binding of script roles to subconcepts of a question conceptualization is subject to the recursive matching of patterns which indicate the common features of the roles. (This will be explained in more detail in the section on interactive script instantiation.) After the scene referenced by the user question is identified, a new question concept is constructed by substituting role bindings into patterns representing states or actions linked to the identified scene. The script pattern AC-FLY-TO-TARGET, for example, represents the flying of aircraft to the target; it results in the aircraft being over the target, which enables the aircraft to attack the target. The script pattern AC-HIT-TARGET represents the propelling of a weapon toward the target. It results in the destruction of the target, and is followed by the aircraft flying back to the airbase. The knowledge represented by these script patterns is needed to answer the question "What aircraft at Hahn can strike BE70701?".

The answer produced by KNOBS, "F-15s can reach BE70701 from Hahn.", requires a causal inference and a concept completion inference. The first step in producing this answer is to represent the meaning of the sentence. The conceptualization produced by APE-II is shown in Fig. 6a. A search for a question answering pattern to answer this fails, so causal inferences are tried. The question concept is identified to be the AC-HIT-TARGET scene of the OCA script, and the scene which RESULT-ENABLEs it, AC-FLY-TO-TARGET, is instantiated. This new question conceptualization is displayed in Fig. 6b. A question answering pattern whose focus is (OBJECT IS-A) is found which could match the inferred question (Fig. 6c). To enable this pattern to match the inferred question, the FROM case must be inferred. This is accomplished by a concept completion inference which produces the complete conceptualization shown in Fig. 6d. Finally, the action and response of the question answering production are executed to calculate and print an answer. The script patterns which describe the relationships among the scenes of a situation are also used by the KNOBS system to guide a conversation about that domain. The conversation with KNOBS in Fig. 7 illustrates the entering of plan components by interactively instantiating script patterns. The first user sentence instantiates two script patterns (the flying of aircraft, and the striking of a target) and binds the script roles: TARGET to BE70501, WING to 109TFW, AIRCRAFT-NUMBER to 4, and TIME-OVER-TARGET to 0900. KNOBS asks the user to select the AIRCRAFT. Because the user replied with a question whose answer is an aircraft, KNOBS asks if the user would like to use that aircraft as a component of the developing plan. This is accomplished by a rule that is activated when KNOBS asks the user to specify a plan component. The interpretation of the user's negative answer is handled by a rule activated when KNOBS asks a yes-no question. KNOBS checks the consistency of the user's answer and explains a constraint which has failed. Then, the user corrects this problem, and KNOBS processes the extra information supplied by matching the meaning of the user's input to a script pattern.

Send 4 aircraft from the 109TFW to strike BE70501 at 0900.
What aircraft do you want to use?
What aircraft are in the 109TFW?
The 109TFW has F-4Cs. Would you like to use F-4Cs for the aircraft?
No, F-4Gs.
The 109TFW does not contain F-4Gs.
Fly the F-4Gs out of the 126TFW at Ramstein.

A script role can be bound by matching against patterns associated with other script roles in addition to matching against script patterns. Fig. 8 shows a role pattern associated with the script role AIRCRAFT.

This pattern serves two purposes: to prevent bindings to the script role which would not make sense (i.e., the object which plays the AIRCRAFT role must be an aircraft) and to recursively bind other script roles to attached concepts. In this example, the AIRBASE or the WING could be attached to the AIRCRAFT concept, e.g., "F-4Cs from Hahn" or "F-4Cs in the 126TFW". The interactive script interpreter is an alternative to the menu system provided by KNOBS for the entering of important components of a plan to be checked for consistency. KNOBS also provides a means of automatically finishing the creation of a consistent plan. This can allow an experienced mission planner to enter a plan by typing one or two sentences and hitting a key which tells KNOBS to choose the unspecified components. To demonstrate their domain independence, the KNOBS system and APE-II have been provided with knowledge bases to plan and answer questions about naval "show of flag" missions. This version of KNOBS also uses FRL as a database language. A large portion of the question answering capability was directly applicable for a number of reasons. First of all, dictionary entries for frames are constructed automatically when they appear in a user query. The definitions of the attributes (slots) of a frame which are represented as RELATIONs are also constructed when needed. The definitions of many common words such as "be", "have", "a", "of", etc., would be useful in understanding questions in any domain. The question answering productions and concept completion inferences are separated into default and domain specific categories. Many of the simple but common queries are handled by default patterns. For example, "Which airbases have fighters?" and "What ports have cruisers?" are answered by the same default pattern. Currently, the Navy version of KNOBS has 3 domain specific question answering patterns, compared to 22 in the Air Force version. (There are 46 default patterns.) The most important knowledge structure missing in the Navy domain is the scripts which are needed to perform causal inferences and dialog directed planning. Therefore, the system can answer the question "What weapons does the Nimitz have?", but can't answer "What weapons does the Nimitz carry?". We have argued that the processing of natural language database queries should be driven by the meaning of the input, as determined primarily by the meanings of the constituent words. The mechanisms provided for word sense selection and for the inference of missing meaning elements utilize a variety of knowledge sources. It is believed that this approach will prove more general and extensible than those based chiefly on the surface structure of the natural language query.

Appendix:
null
null
null
null
{ "paperhash": [ "lehnert|the_process_of_question_answering", "charniak|six_topics_in_search_of_a_parser:_an_overview_of_ai_language_research", "engelman|interactive_frame_instantiation", "hendrix|developing_a_natural_language_interface_to_complex_data", "roberts|the_frl_manual", "riesbeck|comprehension_by_computer_:_expectation-based_analysis_of_sentences_in_context", "altman|a_conceptual_analysis", "katz|the_structure_of_a_semantic_theory", "grosz|the_representation_and_use_of_focus_in_dialogue_understanding.", "cullingford|script_application:_computer_understanding_of_newspaper_stories." ], "title": [ "The Process of Question Answering", "Six Topics in Search of a Parser: An Overview of AI Language Research", "Interactive Frame Instantiation", "Developing a natural language interface to complex data", "The FRL Manual", "Comprehension by computer : expectation-based analysis of sentences in context", "A Conceptual Analysis", "The structure of a semantic theory", "The representation and use of focus in dialogue understanding.", "Script application: computer understanding of newspaper stories." ], "abstract": [ "Abstract : Problems in computational question answering assume a new perspective when question answering is viewed as a problem in natural language processing. A theory of question answering has been proposed which relies on ideas in conceptual information processing and theories of human memory organization. This theory of question answering has been implemented in a computer program, QUALM, currently being used by two story understanding systems to complete a natural language processing system which reads stories and answers questions about what was read. The processes in QUALM are divided into 4 phases: (1) Conceptual categorization which guides subsequent processing by dictating which specific inference mechanisms and memory retrieval strategies should be invoked in the course of answering a question; (2) Inferential analysis which is responsible for understanding what the questioner really meant when a question should not be taken literally; (3) Content specification which determines how much of an answer should be returned in terms of detail and elaborations, and (4) Retrieval heuristics which do the actual digging to extract an answer from memory.", "My purpose in this paper is to give an overview of natural language understanding work within artificial intelligence (AI). 1 will concentrate on the problem of parsing going from natural language input to a semantic representation. Naturally, the form of semantic representation is a factor in such discussions, so it will receive some attention as well. Furthermore. 1 doubt that parsing can be completely isolated from text processing issues, and hence I will touch upon such seemingly non-parsing issues as script application. Nevertheless, the topic is parsing", "This paper discusses the requirements that interactive frame instantiation imposes on constraint verification. The representations and algorithms of an implemented software solution are presented.", "Aspects of an intelligent interface that provides natural language access to a large body of data distributed over a computer network are described. The overall system architecture is presented, showing how a user is buffered from the actual database management systems (DBMSs) by three layers of insulating components. These layers operate in series to convert natural language queries into calls to DBMSs at remote sites. 
Attention is then focused on the first of the insulating components, the natural language system. A pragmatic approach to language access that has proved useful for building interfaces to databases is described and illustrated by examples. Special language features that increase system usability, such as spelling correction, processing of incomplete inputs, and run-time system personalization, are also discussed. The language system is contrasted with other work in applied natural language processing, and the system's limitations are analyzed.", "Abstract : The Frame Representation Language (FRL) is described. FRL is an adjunct to LISP which implements several representation techniques suggested by Minsky's concept of a frame: defaults, constraints, inheritance, procedural attachment, and annotation. (Author)", "Abstract : ELI (English Language Interpreter) is a natural language parsing program currently used by several story understanding systems. ELI differs from most other parsers in that it: produces meaning representations (using Schank's Conceptual Dependency system) rather than syntactic structures; uses syntactic information only when the meaning can not be obtained directly; talks to other programs that make high level inferences that tie individual events into coherent episodes; uses context-based exceptions (conceptual and syntactic) to control its parsing routines. Examples of texts that ELI has understood, and details of how it works are given.", "This paper presents a ,theoretical analysis of the concept of privacy which emphasizes its role as an interpersonal boundary control process. The paper also analyzes mechanisms and dynamics of privacy, including verbal and paraverbal behavior, personal space, territorial behavior, and culturally based responses. Finally, several functions of privacy are proposed, including regulation of interpersonal interaction, self-other definitional processes, and self-identity. The concept of privacy appears in the literature of several disciplines-psychology, sociology, anthropology, political science, law, architecture, and the design professions. One group of definitions of the term emphasizes seclusion, withdrawal, and avoidance of interaction with others. For example:", "JSTOR is a not-for-profit service that helps scholars, researchers, and students discover, use, and build upon a wide range of content in a trusted digital archive. We use information technology and tools to increase productivity and facilitate new forms of scholarship. For more information about JSTOR, please contact [email protected].. Linguistic Society of America is collaborating with JSTOR to digitize, preserve and extend access to Language. 1. Introduction. This paperl does not attempt to present a semantic theory of a natural language, but rather to characterize the form of such a theory. A semantic theory of a natural language is part of a linguistic description of that language. Our problem, on the other hand, is part of the general theory of language, fully on a par with the problem of characterizing the structure of grammars of natural languages. A characterization of the abstract form of a semantic theory is given by a metatheory which answers such questions as these: What is the domain of a semantic theory? What are the descriptive and explanatory goals of a semantic theory? What mechanisms are employed in pursuit of these goals? What are the empirical and methodological constraints upon a semantic theory? 
The present paper approaches the problem of characterizing the form of semantic theories by describing the structure of a semantic theory of English. There can be little doubt but that the results achieved will apply directly to semantic theories of languages closely related to English. The question of their applicability to semantic theories of more distant languages will be left for subsequent investigations to explore. Nevertheless, the present investigation will provide results that can be applied to semantic theories of languages unrelated to English and suggestions about how to proceed with the construction of such theories. We may put our problem this way: What form should a semantic theory of a natural language take to accommodate in the most revealing way the facts about the semantic structure of that language supplied by descriptive research? This question is of primary importance at the present stage of the development of semantics because semantics suffers not from a dearth of facts about meanings and meaning relations in natural languages, but rather from the lack of an adequate theory to organize, systematize, and generalize these facts. Facts about the semantics of natural languages have been contributed in abundance by many diverse fields, including philosophy, linguistics, philology, and …", "Abstract : This report develops a representation of focus of attention thatcircumscribes discourse contexts within a general representation ofknowledge. Focus of attention is essential to any comprehension processbecause what and how a person understands is strongly influenced bywhere his attention is directed at a given moment. To formalize thenotion of focus, the need for and the use of focus mechanisms areconsidered from the standpoint of building a computer system that canparticipate in a natural language dialogue with a ser, Two ranges offocus, global and immediate, are investigated, and representations forincorporating them in a computer system are developed.The global focus in which an utterance is interpreted is determinedby the total discourse and situational setting of the utterance. Itinfluences what is talked about, how different concepts are introduced,and how concepts are referenced. To encode global focuscomputationally, a representation is developed that highlights thoseitems that are relevant at a given place in a dialogue. The underlyingknowledge representation is segmented into subunits, called focusspaces, that contain those items that are in the focus of attention of adialogue participant during a particular part of the dialogue.Mechanisms are required for updating the focus representation,because, as a dialogue progresses, the objects and actions that arerelevant to the conversation, and therefore in the participants' focusof attention, change. Procedures are described for deciding when andhow to shift focus in task-oriented dialogues, i.e., in dialogues inwhich the participants are cooperating in a shared task. Theseprocedures are guided by a representation of the task being performed.The ability to represent focus of attention in a languageunderstanding system results in a new approach to an important problemin discourse comprehension -- the identification of the referents ofdefinite noun phrases.", "Abstract : The report describes a computer story understander which applies knowledge of the world to comprehend what it reads. 
The system, called SAM, reads newspaper articles from a variety of domains, then demonstrates its understanding by summarizing or paraphrasing the text, or answering questions about it. (Author)" ], "authors": [ { "name": [ "W. Lehnert" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Eugene Charniak" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "C. Engelman", "E. Scarl", "C. Berg" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "G. Hendrix", "E. Sacerdoti", "Daniel Sagalowicz", "Jonathan Slocum" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "R. B. Roberts", "I. Goldstein" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "C. Riesbeck", "R. Schank" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "I. Altman" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "J. Katz", "J. Fodor" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "B. Grosz" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "R. E. Cullingford" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null, null, null, null, null, null, null, null ], "s2_corpus_id": [ "57370597", "14027944", "7886285", "15391397", "61042072", "60546035", "145075954", "9860676", "61114426", "60708295" ], "intents": [ [], [], [], [], [], [], [], [], [], [] ], "isInfluential": [ false, false, false, false, false, false, false, false, false, false ] }
null
504
0.077381
null
null
null
null
null
null
null
null
a2193e12514b8f366b4d0128a250bbb37fe49b7c
8893652
null
Menu-Based Natural Language Understanding
This paper describes the NLMenu System, a menu-based natural language understanding system. Rather than requiring the user to type his input to the system, input to NLMenu is made by selecting items from a set of dynamically changing menus. Active menus and items are determined by a predictive left-corner parser that accesses a semantic grammar and lexicon. The advantage of this approach is that all inputs to the NLMenu System can be understood thus giving a 0% failure rate. A companion system that can automatically generate interfaces to relational databases is also discussed.
{ "name": [ "Tennant, Harry R. and", "Ross, Kenneth M. and", "Saenz, Richard M. and", "Thompson, Craig W. and", "Miller, James R." ], "affiliation": [ null, null, null, null, null ] }
null
null
21st Annual Meeting of the Association for Computational Linguistics
1983-06-01
15
112
null
null
null
One class of problem that caused negative and false user expectations was the user's inability to distinguish between the limitations in the system's conceptual coverage and the system's linguistic coverage. Often, users would attempt to paraphrase a sentence many times when the reason for the system's lack of understanding was due to the fact that the system did not have data about the query being asked (i.e. the question exceeded the conceptual coverage of the system). Conversely, users' queries would often fail because they were phrased in a way that the system could not handle (i.e. the question exceeded the linguistic coverage of the system). Much research into the building of natural language interfaces has been going on for the past 15 years. The primary direction that this research has taken is to improve and extend the capabilities and coverage of natural language interfaces. Thus, work has focused on constructing and using new formalisms (both syntactically and semantically based) and on improving the grammars and/or semantics necessary for characterizing the range of sentences to be handled by the system. The ultimate goal of this work is to give natural language interfaces the ability to understand larger and larger classes of input sentences. Tennant (1980) is one of the few attempts to consider the problem of evaluating natural language interfaces. The results reported by Tennant concerning his evaluation of the PLANES System are discouraging. These results show that a major problem with PLANES was the negative expectations created by the system's inability to understand input sentences. The inability of PLANES to handle sentences that were input caused the users to infer that many other sentences would not be correctly handled. These inferences about PLANES' capabilities resulted in much user frustration because of their very limited assumptions about what PLANES could understand. It rendered them unable to successfully solve many of the problems they were assigned as part of the evaluation of PLANES, even though these problems had been specifically designed to correspond to some of the system's capabilities. The problem pointed out by Tennant seems to be a general problem that must be faced by any natural language interface. If the system is unable to understand user inputs, then the user will infer that many other sentences cannot be understood. Often, these expectations serve to severely limit the classes of sentences that users input, thus making the natural language interface virtually unusable for them. If natural language interfaces are to be made usable for novice users, with little or no knowledge of the domain of the system to which they are interfacing, then negative and false expectations about system capabilities and performance must be prevented. The most obvious way to prevent users of a natural language interface from having negative expectations is to expand the coverage of that interface to the point where practically all inputs are understood. By doing this, most sentences that are input will be understood and few negative expectations will be created for the user. Then users will have enough confidence in the natural language interface to attempt to input a wide range of sentences, most of which will be understood. However, natural language interfaces with the ability to understand virtually all input sentences are far beyond current technology.
Thus, users will continue to have many negative expectations about system coverage. A possible solution to this problem is the use of a set of training sessions to teach the user the syntax of the system. However, there are several problems with this. First, it does not allow untrained novices to use such a system. Second, it assumes that infrequent users will take with them and remember what they learned about the coverage of the system. Both of these are unreasonable restrictions. In this paper, we will employ a technique that applies current technology (current grammar formalisms, parsing techniques, etc.) to make natural language interface systems meet the criteria of usability by novice users. To do this, user expectations must closely match system performance. Thus, the interface system must somehow make it clear to the user what the coverage of the system is. Rather than requiring the user to type his input to the natural language understanding system, the user is presented with a set of menus on the upper half of a high resolution bit map display. He can choose the words and phrases that make up his query with a mouse. As the user chooses items, they are inserted into a window on the lower half of the screen so that he can see the sentence he is constructing. As a sentence is constructed, the active menus and items in them change to reflect only the legal choices, given the portion of the sentence that has already been input. At any point in the construction of a natural language sentence, only those words or phrases that could legally come next will be displayed for the user to select. Sentences which cannot be processed by the natural language system can never be input to the system, giving a 0% failure rate. In this way, the scope and limitations of the system are made immediately clear to the user and only understandable sentences can be input. Thus, all queries fall within the linguistic and conceptual coverage of the system. The grammars used in the NLMenu System are context-free semantic grammars written with phrase structure rules. These rules may contain the standard abbreviatory conventions used by linguists for writing phrase structure rules. Curly brackets ({}, sometimes called braces) are used to indicate optional elements in a rule. Additionally, square brackets ([]) are used as well. They have two uses. First, in conjunction with curly brackets. Since it is difficult to allow rules to be written in two dimensions as linguists do, where alternatives in curly brackets are written one below the other, we require that each alternative be put in square brackets. Thus, the rule below in (1) would be written as shown in (2).

(2) A --> B {[C X] [E Y]} D

Note that for single alternatives, the square brackets can be deleted without loss of information. We permit this and therefore {A B} is equivalent to {[A][B]}. The second use of square brackets is inside of parentheses. An example of this appears in rule (3) below.

(3) Q --> R ([M N] V)

This rule is an abbreviation for two rules, Q --> R M N and Q --> R V.
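The abbreviatory conventions can be made concrete with a small expander. The structured rule format below (tuples tagged OPT for braces and ALTS for parenthesized alternatives) is an assumption standing in for the surface notation; the expansion of rule (3) reproduces the two rules given above, and rule (2) is expanded according to the optional-alternative reading of the braces.

```python
# A sketch (under assumed data structures) of expanding abbreviated
# right-hand sides into plain context-free rules.

from itertools import product

def expand(rhs):
    """Yield every plain right-hand side denoted by an abbreviated one."""
    choices = []
    for item in rhs:
        if isinstance(item, tuple) and item[0] == "OPT":
            # {[...] [...]}: optional, or exactly one of the alternatives
            choices.append([[]] + [list(alt) for alt in item[1]])
        elif isinstance(item, tuple) and item[0] == "ALTS":
            # ([...] [...]): exactly one of the alternatives
            choices.append([list(alt) for alt in item[1]])
        else:
            choices.append([[item]])
    for combo in product(*choices):
        yield [sym for part in combo for sym in part]

# A --> B {[C X] [E Y]} D
print(list(expand(["B", ("OPT", [["C", "X"], ["E", "Y"]]), "D"])))
# Q --> R ([M N] V)   ->   Q --> R M N  and  Q --> R V
print(list(expand(["R", ("ALTS", [["M", "N"], ["V"]])])))
```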
Any arbitrary context-free grammar is permitted except for those grammars containing two classes of rules. These are rules of the form X --> null and rules that generate cycles, for example, A --> B, B --> C, C --> D and D --> A. The elimination of the second class of rules causes no difficulty and does not impair a grammar writer in any way. If the second class of rules were permitted, an infinite number of parses would result for sentences of grammars using them. The elimination of the first class of rules causes a small inconvenience in that it prevents grammar writers from using the existence of null nodes in parse trees to account for certain unbounded dependencies like those found in questions like "Who do you think I saw?", which are said in some linguistic theories to contain a null noun phrase after the word "saw". However, alternative grammatical treatments, not requiring a null noun phrase, are also commonly used. Thus, the prohibition of such rules requires that these alternative grammatical treatments be used. In addition to syntactic information indicating the allowable sentences, the grammar formalism also contains semantic information that determines what the meaning of each input sentence is. This is done by using lambda calculus. The mechanism is similar to the one used in Montague Grammar and the various theories that build on Montague's work. Associated with every word in the lexicon, there is a translation. This translation is a portion of the meaning of a sentence in which the word appears. In order to properly combine the translations of the words in a sentence together, there is a rule associated with each context-free rule indicating the order in which the translations of the symbols on the right side of the arrow of a context-free rule are to be combined. These rules are parenthesized lists of numbers where the number 1 refers to the first item after the arrow, the number 2 to the second, etc. For example, for the rule X --> A B C D, a possible rule indicating how to combine translations might be (3 (1 2 4)). This rule means that the translation of A is taken as a function and applied to the translation of B as its argument. This resulting new translation is then taken as a function and applied to the translation of item 4 as its argument. This resulting translation is then the argument to the translation of item 3, which is the function. In general, the translation of the leftmost number applies to the translation of the number to its right as the argument. The result of this then is a function which applies to the translation of the item to its right as the argument. However, parentheses can override this as in the example above. For rules containing abbreviatory conventions, one translation rule must be written for every possible expansion of the rule. Translations that are functions are of the form (lambda x (... x ...)). When this is applied to an item like "c" as the argument, "c" is plugged in for every occurrence of x after the "lambda x" that is not within the scope of a more deeply embedded "lambda x". This is called lambda conversion and the result is just the expression with the "lambda x" stripped off of the front and the substitution made.
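The combination rule (3 (1 2 4)) can be illustrated directly. In the sketch below the children's translations are ordinary Python functions and constants (the particular translations are invented); only the order of application follows the description above.

```python
# A hedged sketch of the translation-combination step; the example
# translations are invented, and only the application order is the point.

def combine(rule, translations):
    """rule is a nested list of 1-based child indices, e.g. [3, [1, 2, 4]];
    each item applies, as a function, to the value of the item on its right."""
    def value(node):
        if isinstance(node, int):
            return translations[node - 1]
        result = value(node[0])
        for arg in node[1:]:
            result = result(value(arg))
        return result
    return value(rule)

# X --> A B C D with combination rule (3 (1 2 4)):
A = lambda b: (lambda d: ("A", b, d))     # A's translation: a curried function
B = "b-translation"
C = lambda inner: ("C", inner)            # C's translation applies last
D = "d-translation"
print(combine([3, [1, 2, 4]], [A, B, C, D]))
# -> ('C', ('A', 'b-translation', 'd-translation'))
```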
The parser used in the NLMenu system is an implementation of an enhanced version of the modified left-corner algorithm described in Ross (1982), which continues the work described in Ross (1981) and builds on that work and on the work of Griffiths and Petrick (1965). The enhancements enable the parser to parse a word at a time and to predict the set of possible next words in a sentence, given the input that has come before.

Griffiths and Petrick (1965) propose several algorithms for recognizing sentences of context-free grammars in the general case. One of these algorithms, the NBT (Non-selective Bottom to Top) Algorithm, has since been called the "left-corner" algorithm. Of late, interest in left-corner parsers has been rekindled. Slocum (1981) shows that a left-corner parser inspired by Griffiths and Petrick's algorithm performs quite well when compared with parsers based on the Cocke-Kasami-Younger algorithm (see Younger 1967).

Although algorithms to recognize or parse context-free grammars can be stated in terms of push-down store automata, Griffiths and Petrick (henceforth G+P) state their algorithm in terms of Turing machines to make its operation clearer. A somewhat modified version of their algorithm is given below; the modifications transform the recognition algorithm into a parsing algorithm.

The G+P algorithm employs two push-down stacks; the modified algorithm uses three, called alpha, beta and gamma. Turing machine instructions are of the following form, where A, B, C, D, E and F can be arbitrary strings of symbols from the terminal and nonterminal alphabet:

[A,B,C] ---> [D,E,F] if "Conditions"

This is to be interpreted as follows: if A is on top of stack alpha, B is on top of stack beta, C is on top of stack gamma, and "Conditions" are satisfied, then replace A by D, B by E, and C by F. The algorithm itself consists of three instruction schemata. Instruction (1) applies when the symbol V1 on top of alpha is the left corner of some grammar rule A --> V1 A2 ... An, where A is in the set of nonterminals and X is the current goal on top of beta: V1 is removed from alpha, the remaining right-hand-side symbols A2 ... An followed by the marker t are pushed onto beta above X, and A is pushed onto gamma. Instruction (2), [X,t,A] ---> [A X, ~, ~], applies when the marker t is on top of beta and A is on top of gamma: both are removed, and the completed constituent A is pushed onto alpha above X. Instruction (3) applies when the same symbol B, which may be a nonterminal or a terminal, is on top of both alpha and beta: both occurrences are removed.

To begin, put the terminal string to be parsed, followed by END, on stack alpha; put the nonterminal that is to be the root node of the tree to be constructed, followed by END, on stack beta; and put END on stack gamma. The symbol t is neither a terminal nor a nonterminal. When END is on top of each stack, the string has been recognized. If none of the Turing machine instructions applies and END is not on top of each stack, the path that led to this situation was a bad path and does not yield a valid parse.

The rules necessary to give a parse tree can be stated informally (i.e., not in terms of Turing machine instructions) as follows: when (1) is applied, attach V1 beneath A; when (3) is applied, attach the B removed from alpha as the right daughter of the top symbol on gamma. Note that there is a formal statement of the parsing version of NBT in Griffiths (1965); however, it is somewhat more complicated and obscures what is going on during the parse, so the informal procedure given above is used instead.

The SBT (Selective Bottom to Top) algorithm is a selective version of the NBT algorithm and is also given in G+P. The only difference between the two is that the SBT algorithm employs a selective technique for increasing efficiency. In the terminology of G+P, a selective technique is one that eliminates bad parse paths before trying them. The technique employed here is a reachability matrix, which indicates whether each nonterminal in the grammar can dominate each terminal or nonterminal in a tree where that terminal or nonterminal is on the leftmost branch. To use it, an additional condition is put on instruction (1), requiring that X, the goal on top of beta, can reach down to A.

Ross (1981) modifies the SBT Algorithm to handle directly grammar rules that use several abbreviatory conventions often employed when writing grammars: parentheses (indicating optional nodes) and curly brackets (indicating that the items within are alternatives) can appear in the rules that the parser accesses when parsing a string. These modifications will not be discussed in this paper, but the parser employed in the NLMenu System incorporates them because they increase efficiency, as discussed in Ross (1981).
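The following Python sketch, again our own construction rather than the system's code, realizes the three instruction schemata as a simple backtracking search (control structure is taken up in more detail below). The toy grammar, the representation of stacks as tuples whose first element is the top, and the reserved symbols END and t are all choices made for illustration; the sketch only recognizes strings, omitting the tree-building bookkeeping and the SBT reachability check.

```python
GRAMMAR = {                     # plain context-free rules: LHS -> list of right-hand sides
    "S":  [("NP", "VP")],
    "NP": [("det", "n")],
    "VP": [("v", "NP")],
}
END, T = "END", "t"

def recognize(words, start="S"):
    """Depth-first backtracking search over the nondeterministic instructions."""
    def search(alpha, beta, gamma):
        if alpha[0] == END and beta[0] == END and gamma[0] == END:
            return True                              # every stack exhausted: accepted
        ok = False
        v1 = alpha[0]
        if v1 not in (END, T):                       # instruction (1): v1 is a left corner
            for a, rhss in GRAMMAR.items():
                for rhs in rhss:
                    if rhs[0] == v1:
                        ok |= search(alpha[1:], rhs[1:] + (T,) + beta, (a,) + gamma)
        if beta[0] == T and gamma[0] != END:         # instruction (2): constituent completed
            ok |= search((gamma[0],) + alpha, beta[1:], gamma[1:])
        if alpha[0] == beta[0] and alpha[0] not in (END, T):   # instruction (3): goal matched
            ok |= search(alpha[1:], beta[1:], gamma)
        return ok
    return search(tuple(words) + (END,), (start, END), (END,))

print(recognize(["det", "n", "v", "det", "n"]))      # True
print(recognize(["det", "v", "n"]))                  # False
```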
At this point, the statement of the algorithm is completely neutral with respect to control structure. At the beginning of a parse there is only one 3-tuple of stacks. However, because the algorithm is non-deterministic, there are potentially points during a parse at which more than one Turing machine instruction can apply. Each application of a different instruction to the same parser state sends the parser off on a possible parse path. Each of these paths could result in a valid parse, and all must be followed to completion. To assure this, it is necessary to proceed in some principled way.

One strategy is to push one state as far as it will go: apply one of the applicable rules, get a new state, and then apply one of the applicable rules to that new state. This continues until either no rules apply or a parse is found. If no rules apply, it was a bad parse path. If a parse is found, it is one of possibly many parses for the sentence. In either case, the algorithm must continue and pursue all other alternative paths. One way to do this and assure that all alternatives are pursued is to backtrack to the last choice point, pick another applicable rule, and continue in the manner described earlier. By doing this until the parser has backed up through all possible choice points, all parses of the sentence will be found. A parser that works in this manner is a depth-first backtracking parser. This is probably the most straightforward control structure for a left-corner parser.

Alternative control structures are possible. Rather than pursuing one path as far as possible, one could go down one parse path, leave that path before it is finished, and then start another; the first path can be resumed later from the point at which it was stopped. An alternative control structure of this kind is necessary to enable parsing to begin before the entire input string is available.

To enable the parser to function this way, the depth-first control structure described earlier is used, with one addition. To allow parsing to begin given only a prefix of the input string, the item MORE is inserted after the last input item that is given to the parser. If no other instructions apply and MORE is on top of stack alpha, the parser begins to backtrack as described earlier, and the contents of stacks beta and gamma are saved. Once all backtracking is completed, additional input is put on alpha and parsing begins again with a set of states, each containing the new input string on alpha and one of the saved tuples of beta and gamma. Each of these states is a distinct parse path.

To parse a word at a time, the first word of the sentence followed by MORE is put on alpha. The parser then goes as far as it can, given this word, and a set of tuples containing beta and gamma results. Each of these tuples, along with the next word, is then passed to the parser. The ability to parse a word at a time is essential for the NLMenu System, but it is also beneficial for more traditional natural language interfaces: it can increase the perceived speed of any parser, since work can proceed while the user is typing and composing his input.
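Continuing the same sketch (same toy grammar and instruction set; all names remain our own inventions), the fragment below runs the parser a word at a time: each step feeds one word followed by the MORE marker, lets every parse path run until nothing more applies, and saves the surviving beta-gamma pairs. The last function anticipates the prediction step described in the next paragraphs, reading the possible next words off the saved states with a left-corner reachability table.

```python
GRAMMAR = {"S": [("NP", "VP")], "NP": [("det", "n")], "VP": [("v", "NP")]}   # as above
END, T, MORE = "END", "t", "MORE"
TERMINALS = {"det", "n", "v"}

def step(word, states):
    """Advance every saved (beta, gamma) pair by one word; return the survivors."""
    survivors = []
    def search(alpha, beta, gamma):
        applied = False
        v1 = alpha[0]
        if v1 != MORE:                                   # instruction (1)
            for a, rhss in GRAMMAR.items():
                for rhs in rhss:
                    if rhs[0] == v1:
                        applied = True
                        search(alpha[1:], rhs[1:] + (T,) + beta, (a,) + gamma)
        if beta[0] == T and gamma[0] != END:             # instruction (2)
            applied = True
            search((gamma[0],) + alpha, beta[1:], gamma[1:])
        if alpha[0] == beta[0] and alpha[0] != MORE:     # instruction (3)
            applied = True
            search(alpha[1:], beta[1:], gamma)
        if not applied and alpha[0] == MORE:
            survivors.append((beta, gamma))              # wait here for the next word
    for beta, gamma in states:
        search((word, MORE), beta, gamma)
    return survivors

def left_corners(grammar):
    """Transitive closure of 'can appear on the leftmost branch below'."""
    reach = {lhs: {rhs[0] for rhs in rhss} for lhs, rhss in grammar.items()}
    changed = True
    while changed:
        changed = False
        for x in reach:
            extra = set().union(*(reach.get(y, set()) for y in reach[x])) - reach[x]
            if extra:
                reach[x] |= extra
                changed = True
    return reach

REACH = left_corners(GRAMMAR)

def next_words(states):
    """Every terminal that could legally come next on some surviving parse path."""
    out = set()
    for beta, _ in states:
        goal = beta[0]                                   # most immediate goal of this path
        out |= ({goal} | REACH.get(goal, set())) & TERMINALS
    return out

states = [(("S", END), (END,))]
for w in ["det", "n"]:
    states = step(w, states)
print(next_words(states))        # {'v'}: after "det n" only a verb can come next
```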
Note that a rubout facility can be added by saving the beta-gamma tuples that result after parsing each of the words; such a facility is used by the NLMenu System.

The ability to predict the set of possible nth words of a sentence, given the first n-1 words, is the final modification necessary to enable this parser to be used for menu-based natural language understanding. This feature can be added in a straightforward way. Given any beta-gamma pair representing one of the parse paths still active after n-1 words have been input, it is possible to determine the set of words that will allow that state to continue. This is done by examining the topmost symbol on stack beta of the tuple, which represents the most immediate goal of that parse state. To determine all the words that can come next, given that goal, the set of all nodes that are reachable from that node as a left daughter must be determined; this information is easily obtainable from the reachability matrix discussed earlier. Once the set of reachable nodes is determined, all that need be done is to find the subset of these that can dominate lexical material. If this is done for all of the beta-gamma pairs that resulted after parsing the first n-1 words, and the union of the resulting sets is taken, the result is a list of all of the lexical categories that could come next. The list of next words is easily determined from this.

Although a wide class of applications is appropriate for menu-based natural language interfaces, our effort thus far has concentrated on building interfaces to relational databases. This has had several important consequences. First, it has made it easy to compare our interfaces to those built by others, because databases have been a prime application area for natural language interfaces. Second, the process of producing an interface to any arbitrary set of relations has been automated.

We have run a series of pilot studies to evaluate the performance of an NLMenu interface to the parts-suppliers database described in Date (1977). These studies were similar to the ones described in Tennant (1980) that evaluated the PLANES system. Our results were more encouraging than Tennant's. They indicated that both experienced computer users and naive subjects can successfully use a menu-based natural language interface to a database to solve problems. All subjects were able to solve all of their problems.

Comments from subjects indicated that although the phrasing of a query might not have been exactly how the subject would have chosen to ask the question in an unconstrained, traditional system, the subjects were not bothered by this and could find the alternative phrasing without any difficulty. One factor that appeared to be important here was the displaying of the entire set of menus at all times. In cases where it was not clear which item on an active menu would lead to the user's desired query, users looked at the inactive menus for hints on how to proceed. Additionally, the existence of a rubout facility that enabled users to rub out phrases they had input, as far back as desired, encouraged them to explore the system to determine how a sentence might be phrased. There was no penalty for choosing an item that did not allow a user to continue his question in the way he desired; all the user had to do was rub it out and pick again.

The system outlined in this section is a companion system to NLMenu.
It allows NLMenu interfaces to an arbitrary set of relations to be constructed quickly and concisely. Other researchers have examined the problem of constructing portable natural language interfaces, including Kaplan (1979), Harris (1979), Hendrix and Lewis (1981), and Grosz et al. (1982). While the work described here shares similarities with theirs, it differs in several ways. Our interface specification dialogue is simple, short, and supported by the database data dictionary. It is intended for the informed user, not necessarily a database designer and certainly not a grammar expert. Information is obtained from this informed user through a menu-based natural language dialogue. Thus, the interface that builds interfaces is extremely easy to use.

The system for automatically generating NLMenu interfaces to relational databases is divided into two basic components. One component, BUILD-INTERFACE, produces a domain-specific data structure called a "portable spec" by engaging the user in an NLMenu dialogue. The other component, MAKE-PORTABLE-INTERFACE, generates a semantic grammar and lexicon from the portable spec.

The MAKE-PORTABLE-INTERFACE component takes a portable spec as input, uses it to instantiate a domain-independent core grammar and lexicon, and returns a semantic grammar and semantic lexicon pair, which defines an NLMenu interface. The core grammar and lexicon can be small (21 grammar rules and 40 lexical entries at present), but the size of the resulting semantic grammar and lexicon depends on the portable spec.

A portable spec consists of a list of categories, as follows. The COVERED TABLES list specifies all relations or views that the interface will cover. The retrieval, insertion, deletion and modification relations specify ACCESS RIGHTS for the covered tables. The CLASSIFY ATTRIBUTES category classifies the covered attributes according to type, distinguishing non-numeric attributes from computable attributes; computable attributes are numeric attributes that are averageable, summable, etc. A user may choose not to cover some attributes in an interface. IDENTIFYING ATTRIBUTES are attributes that can be used to identify rows; typically they will include the key attributes, but they may include other attributes if these better identify tuples (rows), or may even omit part of a key if one seeks to identify sets of rows together. TWO TABLE JOINS specify supported join paths between tables. THREE TABLE JOINS specify supported "relationships" (in the entity-relationship data model sense) in which one relation relates two others. The EDITED ITEMS specification records old and new values for menu phrases and the window they appear in. EDITED HELP provides a way for users to add to, modify, or replace the automatically generated help messages associated with a menu item. Values in these last two categories record changes that a user makes to his default menu screen to customize phrasings or help messages for an application.

The BUILD-INTERFACE component is itself a menu-based natural language interface, and thus is really another application of the NLMenu system to an interface problem. It elicits from the user the information required to build up a portable spec. In addition to allowing the user to create an interface, it also allows him to modify or combine existing interfaces. The user may also grant interfaces to other users, revoke them, or drop them. The database management system controls which users have access to which interfaces.
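For concreteness, a portable spec for the parts-suppliers example might be represented roughly as the structure below. The field names and sample values are ours alone; only the categories themselves come from the description above, and the actual system's representation may well differ.

```python
# A rough sketch of a "portable spec" as a Python data structure (illustrative only).
portable_spec = {
    "covered_tables":        ["PARTS", "SUPPLIERS", "SHIPMENTS"],
    "access_rights":         {"retrieval":    ["PARTS", "SUPPLIERS", "SHIPMENTS"],
                              "insertion":    ["SHIPMENTS"],
                              "deletion":     [],
                              "modification": ["SHIPMENTS"]},
    "classify_attributes":   {"PARTS.COLOR":   "non-numeric",
                              "SHIPMENTS.QTY": "computable"},   # averageable, summable, etc.
    "identifying_attributes": {"PARTS": ["PNAME"], "SUPPLIERS": ["SNAME"]},
    "two_table_joins":       [("SHIPMENTS.PNO", "PARTS.PNO"),
                              ("SHIPMENTS.SNO", "SUPPLIERS.SNO")],
    "three_table_joins":     [("SUPPLIERS", "SHIPMENTS", "PARTS")],
    "edited_items":          [],    # (window, old phrase, new phrase) customizations
    "edited_help":           {},    # menu item -> replacement help text
}
```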
The system for automatically constructing NLMenu interfaces enjoys several practical and theoretical advantages, outlined below.

End users can construct natural language interfaces to their own data in minutes, not weeks or years, and without the aid of a grammar specialist. There is heavy dependence on a data dictionary, but not on linguistic information.

The interface builder can control coverage. He can decide to make an interface that covers only a semantically related subset of his tables. He can choose to include some attributes and hide others so that they cannot be mentioned. He can choose to support various kinds of joins with natural language phrases. He can mirror the access rights of a user in his interface, so that the interface will allow him to insert, delete, and modify as well as retrieve, and only from those tables on which he has the specified privileges. Thus, interfaces are highly tunable, and the term "coverage" can be given a precise definition. Patchy coverage is avoided because of the uniform way in which the interface is constructed.

Automatically generated natural language interfaces are robust with respect to database changes; interfaces are easy to change if the user adds or deletes tables or changes table descriptions. One need only modify the portable spec to reflect the changes and regenerate the interface.

Automatically generated NLMenu interfaces are guaranteed to be correct (bug free). The interaction in which users specify the parameters defining an interface ensures that the parameters are valid, i.e., that they correspond to real tables, attributes and domains. Instantiating a debugged core grammar with valid parameters yields a correct interface.

Natural language interfaces are constructed from semantically related tables that the user owns or has been granted, and they reflect his access privileges (retrieval, insertion, etc.). By extension, natural language interfaces become database objects in their own right. They are sharable (grantable and revokable) in a controlled way. A user can have several such NLMenu interfaces, each giving him a user view of a semantically related set of data. This notion of a view is like the notion of a database schema found in network and hierarchical systems but not in relational systems; in relational systems, there is no convenient way to group together tables that are semantically related. Furthermore, an NLMenu interface can be treated as an object and granted to other users, so a user acting as a database administrator can make NLMenu interfaces for classes of users too naive to build them themselves (such as executives). Interfaces can also be combined by merging portable specs, so users can combine different, related user views if they wish.

Since an interface covers exactly and only the data and operations that the user chooses, it can be considered a "model of the user" in that it provides a well-bounded language reflecting a semantically related view of the user's data and operations.

A final advantage is that even if an automatically generated interface is for some reason not quite what is needed for an application, it is much easier to first generate an interface this way and then modify it to suit specific needs than it is to build the entire interface by hand.
This has already been demonstrated in the prototype, where an automatically generated interface required for an application by another group at TI was manually altered to provide pictorial database capabilities.

Taken together, the advantages listed above pave the way for low-cost, maintainable interfaces to relational database systems. Many of the advantages are novel when considered with respect to past work. This approach makes it possible for a much broader class of users and applications to use menu-based natural language interfaces to databases.

The NLMenu system does not store the words that correspond to open-class database attributes in the lexicon, as many other systems do. Instead, a meta-category called an "expert" is stored in the lexicon. Experts may be user-supplied or defaulted, and they are arbitrary chunks of code. Possible implementations include directly doing a database lookup and presenting the user with a list of items to choose from, or presenting the user with a type-in window that is constrained to allow only input of the desired type or format (for example, a date).

Many systems allow ellipsis to permit the user, in effect, to ask a parameterized query. We approach this problem by making all phrases that were generated by experts "mouse sensitive" in the sentence. To change the value of a data item, all that needs to be done is to move the mouse over the sentence. When a data item is encountered, it is boxed by the mouse cursor. To change it, one merely clicks the mouse; the expert that originally produced the data item is then called, allowing the user to change the item to something else.

The grammars produced by the automatic generation system permit ambiguity. However, the ambiguity occurs in a small set of well-defined situations involving relative clause attachment. Because of this, it has been possible to define a bracketed and indented format that clearly indicates the source of ambiguity to the user and allows him to choose between alternative readings. Additionally, by constraining the parser to obey several human parsing strategies, as described in Ross (1981), the user is shown a set of possible readings in which the most likely candidate comes first; the user is told that the first bracketed structure is most probably the one he intended.

The menu approach to natural language input has many advantages over the traditional typing approach. Most importantly, every sentence that is input is understood; thus, a 100% success rate for input queries is achieved. Implementation time is greatly decreased because the grammars required can be much smaller. Generally, writing a thorough grammar for an application of a natural language understanding system consumes most of the development time. Note that the reason larger grammars are needed in traditional systems is that every possible paraphrase of a sentence must be understood; in a menu-based system, only one paraphrase is needed, and the user is guided to this paraphrase by the menus.

The fact that menu-based natural language understanding systems guide the user to the input he desires is also beneficial for two other reasons. First, confused users who do not know how to formulate their input need not compose it without help: they only need to recognize their input by looking at the menus, rather than formulating it in a vacuum. Second, the extent of the system's conceptual coverage is apparent: the user immediately knows what the system knows about and what it does not know about.
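Returning briefly to the experts described above, the following toy sketch (ours, not the system's code) shows the general idea: the lexicon stores a callable in place of a fixed set of words. This particular expert stands in for a constrained type-in window by prompting until the typed value matches the required date format; the lexical entry name is hypothetical.

```python
import re

def date_expert(read=input):
    """Toy expert: accept only a value in YYYY-MM-DD form."""
    while True:
        text = read("date (YYYY-MM-DD): ")
        if re.fullmatch(r"\d{4}-\d{2}-\d{2}", text):
            return text
        print("not a valid date, try again")

# An open-class lexical entry then points at the expert instead of listing words.
LEXICON = {"<specific-date>": {"category": "date-value", "expert": date_expert}}
```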
Allowing only one paraphrase of each allowable query makes not only the grammar smaller but the lexicon as well. NLMenu lexicons must be smaller, because if they were the size of a lexicon standardly used for a natural language interface, the menus would be much too large and would therefore be unmanageable. Thus, it is possible that limitations will be imposed on the system by the size of the menus: menus cannot be too big, or the user will be swamped with choices and will be unable to find the right one. Several points must be made here. Even though an inactive menu containing, say, a class of modifiers might have one hundred modifiers, it is likely that all of these will never be active at the same time. Given a semantic grammar with five different classes of nouns, it will most likely be the case that only one fifth of the modifiers make sense as modifiers for any one of those nouns; thus, an active modifier menu will have roughly twenty items in it. We have constructed NLMenu interfaces to about ten databases, some reasonably large, and we have had no problem with the size of the menus becoming unmanageable.

The NLMenu System and the companion system for automatically building NLMenu interfaces described in this paper are both implemented in Lisp Machine Lisp on an LMI Lisp Machine. It has also proved feasible to put them on a microcomputer. Two factors were responsible for this: the word-by-word parse and the smaller grammars. Parsing a word at a time means that most of the work necessary to parse a sentence is done before the sentence has been completely input, so the perceived parse time is much less than it otherwise would be. Parse time is also reduced by the smaller grammars, because it is a function of grammar size: the smaller the grammar, the faster the parse. Smaller grammars can also be dealt with much more easily on a microcomputer with limited memory. Both systems have been implemented in C on the Texas Instruments Professional Computer. These implementations are based on the Lisp Machine implementations but were done by another division of TI. They will be available as a software package that will interface either locally to RSI's Oracle relational DBMS, which uses SQL as the query language, or to various remote computers running DBMSs that use SQL 3.0 as their query language.
{ "paperhash": [ "ross|an_improved_left-corner_parsing_algorithm", "slocum|a_practical_comparison_of_parsing_strategies", "hendrix|transportable_natural-language_interfaces_to_databases", "harris|experience_with_robot_in_12_commercial,_natural_language_data_base_query_applications", "griffiths|letters_to_the_editor:_on_procedures_for_constructing_structural_descriptions_for_three_parsing_algorithms", "griffiths|on_the_relative_efficiencies_of_context-free_grammar", "tennant|evaluation_of_natural_language_processors", "ullman|principles_of_database_systems", "date|an_introduction_to_database_systems" ], "title": [ "An Improved Left-Corner Parsing Algorithm", "A Practical Comparison of Parsing Strategies", "Transportable Natural-Language Interfaces to Databases", "Experience with ROBOT in 12 Commercial, Natural Language Data Base Query Applications", "Letters to the editor: on procedures for constructing structural descriptions for three parsing algorithms", "On the relative efficiencies of context-free grammar", "Evaluation of Natural Language Processors", "Principles of Database Systems", "An Introduction to Database Systems" ], "abstract": [ "This paper proposes a series of modifications to the left corner parsing algorithm for context-free grammars. It is argued that the resulting algorithm is both efficient and flexible and is, therefore, a good choice for the parser used in a natural language interface.", "INTRODUCTION Although the l i terature dealing with formal and natural languages abounds with theoretical arguments of worstcase performance by various parsing strategies [e.g. , Grif f i ths & Petrick, 1965; Aho & Ullman, 1972; Graham, Harrison & Ruzzo, Ig80], there is l i t t l e discussion of comparative performance based on actual practice in understanding natural language. Yet important practical considerations do arise when writ ing programs to understand one aspect or another of natural language utterances. Where, for example, a theorist wi l l characterize a parsing strategy according to i ts space and/or time requirements in attempting to analyze the worst possible input acc3rding to ~n arbi t rary grammar s t r i c t l y l imited in expressive power, the researcher studying Natural Language Processing can be jus t i f ied in concerning himself more with issues of practical performance in parsing sentences encountered in language as humans Actually use i t using a grammar expressed in a form corve~ie: to the human l inguist who is writ ing i t . Moreover, ~ r y occasional poor performance may be quite acceptabl:, part icular ly i f real-time considerations are not invo~ed, e.g., i f a human querant is not waiting for the answer to his question), provided the overall average performance is superior. One example of such a situation is o f f l ine Machine Translation.", "Abstract : Several computer systems have now been constructed that allow users to access databases by posing questions in natural languages, such as English. When used in the restricted domains for which they have been especially designed, these systems have achieved reasonably high levels of performance. However, these systems require the encoding of knowledge about the domain of application in complex data structures that typically can be created for a new database only with considerable effort on the part of a computer professional who has had special training in computational linguistics and the use of databases. 
This paper describes initial work on a methodology for creating natural-language processing capabilities for new databases without the need for intervention by specially trained experts. The approach is to acquire logical schemata and lexical information through simple interactive dialogues with someone who is familiar with the form and content of the database, but unfamiliar with the technology of natural-language interfaces. A prototype system using this methodology is described and an example transcript is presented.", "The ability to understand Natural Language has long been a goal of Artificial Intelligence Research, and it is still far from being solved, however, in the early 1970's the AI research techniques reached a point whereby certain applications became feasible for the first time. Since that time, several systems such as PLANES[1], LIFER[2], and ROBOT[3,4,5] have been built that have demonstrated that the current state of the art is sufficient for quite good natural language data base query. \n \nThe implementation of the R0301 system has been geared for high performance and instaliaoility in accual real world environments. As such, it offers the AI research community some insight into the difficulties encountered when putting the current: AI technology in the hands of people in the real world. This paper discusses the unexpected linguistic and semantic difficulties encountered in the 12 commercial applications to which ROBOT has been applied during the last year and a half.", "Structural Deseriptions for Three Parsing Algorithms I)ear Editor: This uotc gives l}rocedures for constructi~*g structural desc, rip-tions for each of the three major algorilhm classes described in C, rifEths and Petrick [1]. Tile method of present,:ttion is iiHluel~eed by a paper of [leino Kurki-Suonio [2]. We assume a context free grammarG = (I, 7',S, P) with ¢, t ~, .-. , ~t I U Ttmd aTuriug machine as in [1] but having au output string associated with each instruction. [}(}inL they made was that t,2)lcv~{A>< ('{>ul(t {l() the input and cem-i>utatk)n paris easil.v, but thz~t ('()m)h would b{, much easier to use it] st>{'{'ii'yi~g n l)m'ti{'uhtr i{wmat l{}P {}uil)u{. This may be tPue for th(' lmPt iculm' eomi}il{'rs [}wy w('r{' using, lint ii is llOt ~Pll0 iH geu{'ral. The F()m'tL~x hmguaKe on th{' ('()NTI{()I, I)ATA 3(i00 hlts severttl ['eattlres which rive the user a \\err l}<}weriul data i/lal/i])ll_ lati(}lI cal)ability. The I),(>gram beh)w (t,'igure 1) uses some oF diem to rel>roduce Mr, Shavei/'s <)ut/mt (Figure 2) it~clu(ling the floadng dollar sit, u.", "A number of diverse recognition procedures that have been proposed for parsing sentences with respect to a context-free grammar are described in this paper by means of a common device. Each procedure is defined by giving an algorithm for obtaining a nondeterministic Turing Machine recognizer that is equivalent to a given context-free grammar. The formalization of the Turing Machine has been chosen to make possible particularly simple descriptions of the parsing procedures considered. The class of grammars called context-free (CF) by Chomsky [1] has been utilized in various linguistic theories, both as the sole component and as just one of several components of a natural language grammar. In addition, OF grammars have come 1o play a dominant role in the specification and translation of programming languages. 
It has been found that CI i' grammars are at least to some extent adequate for specifying the syntax of programming languages and that the structural descriptions assigned by these grammars are of practical utility in producing compilers. The first application of CF grammars to computer programming seems to have been made by Baekus [2] in officially defining the syntax of the ALGOr, language. Except for notation, so-called \"Backus Normal Ii'orm\" is identical to a CF grammar specification. It is not our purpose here to discuss the appropriateness of the CF grammar for specific applieations. Indeed, the authors' basic disagreement here makes a joint statement impossible. Instead we assume the utility of the CF grammar for some applications and confine ourselves to investigating the effieieneies of different recognition procedures for this class of grammars. A C,F grammar is represented by a quadruple (I, T, S, P) where I U .7' is an alphabet and P contains rules for rewriting symbols from [ as strings of symbols from I U T. These rules have the form: A ~Bi..-B~, (n => 1) where the symbol A is rewritten as the string B1 \"\" B,~. The string D1 ... Dq is derivable from the string CC1 ... Cv if D~-. • /Z~ can be obtained from C1-. • Cp by a finite sequence of applications of rules from P. The sets I and T are such that I= {C:C~D~...DqC P} and INT =~. The men'~bers of T are called terminals, the members of I, nonterminals. There is a designated symbol S C I, and terminal strings derivable from S are called sentences of the grammar. We define …", "Despite a large amount of research on developing natural language understanding programs, little work has been done on evaluating their performance or potential. The evaluations that have been done have been unsystematic and incomplete. This has lead to uncertainty and confusion over the accomplishments of natural language processing research. \nThe lack of evaluation can be primarily attributed to the difficulty of the problem. The desired behavior of natural language processors has not been clearly specified. Partial progress toward the eventual goals for natural language processors has not been delineated, much less measured. \nThis thesis attempts to clarify some of the difficulties behind evaluating the performance of natural language processors. It also proposes an evaluation method that is designed to be systematic and thorough. The method relies on considering a natural language processor from three viewpoints in the light of several taxonomies of issues relevant to natural language processing. Finally, an evaluation is described of PLANES, a natural language database query system.", "A large part is a description of relations, their algebra and calculus, and the query languages that have been designed using these concepts. There are explanations of how the theory can be used to design good systems. A description of the optimization of queries in relation-based query languages is provided, and a chapter is devoted to the recently developed protocols for guaranteeing consistency in databases that are operated on by many processes concurrently", "From the Publisher: \nFor over 25 years, C. J. Date's An Introduction to Database Systems has been the authoritative resource for readers interested in gaining insight into and understanding of the principles of database systems. 
This revision continues to provide a solid grounding in the foundations of database technology and to provide some ideas as to how the field is likely to develop in the future.. \"Readers of this book will gain a strong working knowledge of the overall structure, concepts, and objectives of database systems and will become familiar with the theoretical principles underlying the construction of such systems." ], "authors": [ { "name": [ "Kenneth M. Ross" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Jonathan Slocum" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "G. Hendrix", "W. H. Lewis" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "L. R. Harris" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Timothy V. Griffiths" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Timothy V. Griffiths", "S. R. Petrick" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "H. Tennant" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "J. Ullman" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "C. J. Date" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null, null, null, null, null, null, null ], "s2_corpus_id": [ "1182312", "12179283", "9344388", "46349617", "29117882", "5605652", "58487626", "61817775", "227993896" ], "intents": [ [ "methodology", "background" ], [], [ "background" ], [ "background" ], [ "background" ], [ "background" ], [ "result", "background" ], [], [ "result" ] ], "isInfluential": [ false, false, false, false, false, false, false, false, false ] }
Problem: The problem addressed in this paper is the negative and false user expectations that arise when natural language interfaces fail to understand user inputs because of limitations in the system's conceptual and linguistic coverage. Solution: The paper proposes the NLMenu System, a menu-based natural language understanding system that allows users to input queries by selecting items from dynamically changing menus. By guiding users through menu selections, the system ensures that all inputs can be understood, resulting in a 0% failure rate and preventing negative user expectations. Additionally, a companion system is discussed that can automatically generate interfaces to relational databases, so that the queries it offers are ones the system can understand.
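As a rough illustration of the menu-guided idea (not the NLMenu implementation itself; the grammar, phrases and states below are invented), a short Python sketch: after each choice the system recomputes which phrases can legally continue the query, so every completed input is guaranteed to parse.

# Toy "semantic grammar" mapping a state to (phrase, next state) options.
GRAMMAR = {
    "START":     [("find", "WHAT")],
    "WHAT":      [("parts", "QUALIFIER"), ("suppliers", "QUALIFIER")],
    "QUALIFIER": [("whose color is", "COLOR"), ("", None)],   # "" = finish here
    "COLOR":     [("red", None), ("blue", None)],
}

def menu_for(state):
    # The phrases that may legally be chosen next, and the state each leads to.
    return {phrase: nxt for phrase, nxt in GRAMMAR[state]}

state, chosen = "START", []
while state is not None:
    options = menu_for(state)          # only these items are displayed to the user
    phrase = next(iter(options))       # stand-in for the user's menu selection
    if phrase:
        chosen.append(phrase)
    state = options[phrase]
print(" ".join(chosen))                # e.g. "find parts whose color is red"

Because the user can only assemble inputs the grammar licenses, the 0% parse-failure property follows by construction in this toy setting.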
500
0.224
null
null
null
null
null
null
null
null
11f82062b5b939a9bad16455c0b1bc601bda261c
978468
null
Context-Freeness and the Computer Processing of Human Languages
Context-free grammars, far from having insufficient expressive power for the description of human languages, may be overly powerful, along three dimensions: (1) weak generative capacity: there exists an interesting proper subset of the CFL's, the profligate CFL's, within which no human language appears to fall; (2) strong generative capacity: human languages can be appropriately described in terms of a proper subset of the CF-PSG's, namely those with the ECPO property; (3) time complexity: the recent controversy about the importance of a low deterministic polynomial time bound on the recognition problem for human languages is misdirected, since an appropriately restrictive theory would guarantee even more, namely a linear bound.
{ "name": [ "Pullum, Geoffrey K." ], "affiliation": [ null ] }
null
null
21st Annual Meeting of the Association for Computational Linguistics
1983-06-01
24
23
null
Many computationally inclined linguists appear to think that in order to achieve adequate grammars for human languages we need a bit more power than is offered by context-free phrase structure grammars (CF-PSG's), though not a whole lot more. In this paper, I am concerned with the defense of a more conservative view: that even CF-PSG's should be regarded as too powerful, in three computationally relevant respects: weak generative capacity, strong generative capacity, and time complexity of recognition. All three of these matters should be of concern to theoretical linguists; the study of what mathematically definable classes human languages fall into does not exhaust scientific linguistics, but it can hardly be claimed to be irrelevant to it. And it should be obvious that all three issues also have some payoff in terms of certain computationally interesting, if rather indirect, implications.

Weak generative capacity (WGC) results are held by some linguists (e.g. Chomsky (1981)) to be unimportant. Nonetheless, they cannot be ignored by linguists who are interested in setting their work in a context of (even potential) computational implementation (which, of course, some linguists are not). To paraphrase Montague, we might say that linguistically (as opposed to psycholinguistically) there is no important theoretical difference between natural languages and high-level programming languages. Mediating programs (e.g. a compiler or interpreter), of considerable complexity, will be needed for the interpretation of computer input in either Prolog or Japanese. In the latter case the level of complexity will be much higher, but the assumption is that we are talking quantitatively, not qualitatively. And if we are seriously interested in the computational properties of either kind of language, we will be interested in their language-theoretic properties, as well as properties of the grammars that define them and the parsers that accept them.

The most important language-theoretic class considered by designers of programming languages, compilers, etc. is the context-free languages (CFL's). (Ginsburg (1980, 7) goes so far as to say on behalf of formal language theorists, "We live or die on the context-free languages.") The class of CFL's is very rich. Although there are simply definable languages well known to be non-CF, linguists often take CFL's to be non-CF in error. Several examples are cited in Pullum and Gazdar (1982). For another example, see Dowty, Wall and Peters (1980, p. 81), where exercise 3 invites the reader to prove a certain artificial language non-CF. The exercise is impossible, for the language is a CFL, as noted by William H.
Baxter (personal communication to Gerald Gazdar). From this point on, it will be useful to be able to refer to certain types of formal language by names. I shall use the terms defined in (1) through (3), among others.

(1) Triple Counting Languages: languages that can be mapped by a homomorphism onto some language of the form {a^n b^n c^n | n >= 1}

(2) String Matching Languages: languages that can be mapped by a homomorphism onto some language of the form {xx | x is in some infinite language A}

(3) String Contrasting Languages: languages that can be mapped by a homomorphism onto some language of the form {xcy | x and y are in some infinite language A and x != y}

Programming languages are virtually always designed to be CF, except that there is a moot point concerning the implications of obligatory initial declaration of variables as in ALGOL or Pascal, since if variables (identifiers) can be alphanumeric strings of arbitrary length, a syntactic guarantee that each variable has been declared is tantamount to a syntax for a string matching language. The following view seems a sensible one to take about such cases: languages like ALGOL or Pascal are CF, but not all ALGOL or Pascal programs compile or run. Programs using undeclared variables make no sense either to the compiler or to the CPU. But they are still programs, provided they conform in all other ways to the syntax of the language in question, just as a program which always goes into an infinite loop and thus never gives any output is a program. Aho and Ullman (1977, 140) take such a view: "the syntax of ALGOL ... does not get down to the level of characters in a name. Instead, all names are represented by a token such as id, and it is left to the bookkeeping phase of the compiler to keep track of declarations and uses of particular names." The bookkeeping has to be done, of course, even in the case of languages like LISP whose syntax does not demand a list of declarations at the start of each program.

Various efforts have been made in the linguistic literature to show that some human language has an infinite, appropriately extractable subset that is a triple counting language or a string matching language. (By appropriately extractable I mean isolable via either homomorphism or intersection with a regular set.) But all the published claims of this sort are fallacious (Pullum and Gazdar 1982). This lends plausibility to the hypothesis that human languages are all CF. Stronger claims than this (e.g. that human languages are regular, or of finite cardinality) have seldom been seriously defended. I now want to propose one, however.

I propose that human languages are never profligate CFL's; informally, a CFL is profligate if every CF-PSG that generates it must have at least as many nonterminal symbols as there are terminal symbols participating in its recursive mechanisms. Clearly, only an infinite CFL can be profligate, and clearly the most commonly cited infinite CFL's are not profligate. For instance, {a^n b^n | n >= 0} is not profligate, because it has two terminal symbols but there is a grammar for it that has only one nonterminal symbol, namely S. (The rules are: {S -> aSb, S -> e}.) However, profligate CFL's do exist. There are even regular languages that are profligate: a simple example (due to Christopher Culy) is (a* + b*). More interesting is the fact that some string contrasting languages as defined above are profligate. Consider the string contrasting language over the vocabulary {a, b, c} where A = (a + b)*.
A string xcy in (a + b)*c(a + b)* will be in this language if any one of the following conditions is met: (a) x is longer than y; (b) x is shorter than y; (c) x is the same length as y but there is an i such that the ith symbol of x is distinct from the ith symbol of y. The interesting condition here is (c). The grammar has to generate, for all i and for all pairs <u, v> of distinct symbols in the terminal vocabulary, all those strings in (a + b)*c(a + b)* such that the ith symbol is u and the ith symbol after c is v. There is no bound on i, so recursion has to be involved. But it must be recursion through a category that preserves a record of which symbol is crucially going to be deposited at the ith position in the terminal string and mismatched with a distinct symbol in the second half. A CF-PSG that does this can be constructed (see Pullum and Gazdar 1982, 478, for a grammar for a very similar language). But such a grammar has to use recursive nonterminals, one for each terminal, to carry down information about the symbol to be deposited at a certain point in the string; a concrete sketch of such a grammar is given below. In the language just given there are only two relevant terminal symbols, but if there were a thousand symbols that could appear in the x and y strings, then the vocabulary of recursive nonterminals would have to be increased in proportion. (The second clause in the definition of profligacy makes it irrelevant whether there are other terminals in the language, like c in the language cited, that do not have to participate in the recursive mechanisms just referred to.)

For a profligate CFL, the argument that a CF-PSG is a cumbersome and inelegant form of grammar might well have to be accepted. A CF-PSG offers, in some cases at least, an appallingly inelegant hypothesis as to the proper description of such a language, and would be rejected by any linguist or programmer. The discovery that some human language is profligate would therefore provide (for the first time, I claim) real grounds for a rejection of CF-PSG's on the basis of strong generative capacity (considerations of what structural descriptions are assigned to strings) as opposed to weak (what language is generated). However, no human language has been shown to be a profligate CFL. There is one relevant argument in the literature, found in Chomsky (1963). The argument is based on the nonidentity of constituents allegedly required in comparative clause constructions like (4). Chomsky took sentences like (5) to be ungrammatical, and thus assumed that the nonidentity between the bracketed phrases in the previous example had to be guaranteed by the grammar. Chomsky took this as an argument for non-CF-ness in English, since he thought all string contrasting languages were non-CF (see Chomsky 1963, 378-379), but it can be reinterpreted as an attempt to show that English is (at least) profligate. (It could even be reconstituted as a formally valid argument that English was non-CF if supplemented by a demonstration that the class of phrases from which the bracketed sequences are drawn is not only infinite but non-regular; cf. Zwicky and Sadock.) However, the argument clearly collapses on empirical grounds. As pointed out by Pullum and Gazdar (1982, 476-477), even Chomsky now agrees that strings like (5) are grammatical (though they need a contrastive context and the appropriate intonation to make them readily acceptable to informants).
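The construction just described can be made concrete with a small sketch (mine, not the grammar of Pullum and Gazdar 1982): a brute-force enumerator over a CF-PSG, applied first to {a^n b^n}, where a single nonterminal suffices, and then to the "differ at position i" core of the string contrasting language, where the recursive nonterminals Da and Db each carry the identity of the terminal to be deposited at the differing position.

from collections import deque

def min_yields(grammar):
    # Least number of terminal symbols each nonterminal can eventually yield.
    INF = float("inf")
    yields = {nt: INF for nt in grammar}
    changed = True
    while changed:
        changed = False
        for nt, rhss in grammar.items():
            best = min(sum(yields.get(s, 1) for s in rhs) for rhs in rhss)
            if best < yields[nt]:
                yields[nt], changed = best, True
    return yields

def generate(grammar, start, max_len):
    # Brute-force enumeration of all terminal strings of length <= max_len
    # (leftmost expansion with pruning); only suitable for tiny grammars.
    bound = min_yields(grammar)
    done, seen, queue = set(), set(), deque([(start,)])
    while queue:
        form = queue.popleft()
        if sum(bound.get(s, 1) for s in form) > max_len:
            continue
        nts = [i for i, s in enumerate(form) if s in grammar]
        if not nts:
            done.add("".join(form))
            continue
        for rhs in grammar[form[nts[0]]]:
            new = form[:nts[0]] + rhs + form[nts[0] + 1:]
            if new not in seen:
                seen.add(new)
                queue.append(new)
    return sorted(done, key=len)

EPS = ()

# One nonterminal is enough for {a^n b^n : n >= 0}: S -> a S b | e.
anbn = {"S": [("a", "S", "b"), EPS]}

# The "x and y differ at some position i" core of {x c y : x, y in (a+b)*, x != y}.
# Da and Db each remember which terminal goes at the differing position in x
# (with a mismatching terminal placed at the corresponding position in y), so
# one recursive nonterminal is needed per relevant terminal symbol.
contrast = {
    "S":  [("Da", "b", "W"), ("Db", "a", "W")],
    "Da": [("Z", "Da", "Z"), ("a", "W", "c")],
    "Db": [("Z", "Db", "Z"), ("b", "W", "c")],
    "Z":  [("a",), ("b",)],
    "W":  [("Z", "W"), EPS],
}

print(generate(anbn, "S", 6))          # ['', 'ab', 'aabb', 'aaabbb']
for s in generate(contrast, "S", 5):   # every generated string is x c y with x != y
    x, y = s.split("c", 1)
    assert x != y

With a thousand possible symbols in the x and y strings, a thousand such D-nonterminals would be needed, which is exactly the profligacy point.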
Hence Chomsky's examples do not show that there is a homomorphism mapping English onto some profligate string contrasting language. The interesting thing about this, if it is correct, is that it suggests that human languages not only never demand the syntactic string comparison required by string matching languages, they never call for syntactic string comparison over infinite sets of strings at all, whether for symbol-by-symbol checking of identity (which typically makes the language non-CF) or for specifying a mismatch between symbols (which may not make the language non-CF, but typically makes it profligate).

There is an important point about profligacy that I should make at this point. My claim that human languages are non-profligate entails that each human language has at least one CF-PSG in which the nonterminal vocabulary has cardinality strictly less than the terminal vocabulary, but not that the best grammar to implement for it will necessarily meet this condition. The point is important, because the phrase structure grammars employed in natural language processing generally have complex nonterminals consisting of sizeable feature bundles. It is not uncommon for a large natural language processing system to employ thirty or forty binary features (or a rough equivalent in terms of multi-valued features), i.e. about as many features as are employed for phonological description by Chomsky and Halle (1968). The GPSG system described in Gawron et al. (1982) has employed features on this sort of scale at all points in its development, for example. Thirty or forty binary features yield between a billion and a trillion logically distinguishable nonterminals (if all values for each feature are compatible with all combinations of values for all other features). Because economical techniques for rapid checking of relevant feature values are built into the parsers normally used for such grammars, the size of the potentially available nonterminal vocabulary is not a practical concern. In principle, if the goal of capturing generalizations and reducing the size of the grammar formulation were put aside, the nonterminal vocabulary could be vastly reduced by replacing rule schemata by long lists of distinct rules expanding the same nonterminal.

Naturally, no claim has been made here that profligate CFL's are computationally intractable. No CFL's are intractable in the theoretical sense, and intractability in practice is so closely tied to details of particular machines and programming environments as to be pointless to talk about in terms divorced from actual measurements of size for grammars, vocabularies, and address spaces. I have been concerned only to point out that there is an interesting proper subset of the infinite CFL's within which the human languages seem to fall.

One further thing may be worth pointing out. The kind of string contrasting languages I have been concerned with above are strictly nondeterministic. The deterministic CFL's (DCFL's) are closed under complementation. But the complement of (6), relative to the regular set (a + b)*c(a + b)*, is (7): {xcx | x is in (a + b)*}. If (7) is non-CF and is the complement of (6), then (6) is not a DCFL. [OPEN PROBLEM: Are there any nonregular profligate DCFL's?]

I now turn to a claim involving strong generative capacity (SGC). In addition to claiming that human languages are non-profligate CFL's, I want to suggest that every human language has a linguistically adequate grammar possessing the Exhaustive Constant Partial Ordering (ECPO) property of Gazdar and Pullum (1981).
A grammar has this property if there is a single partial ordering of the nonterminal vocabulary which no right hand side of any rule violates. The ECPO CF-PSG's are a nonempty proper subset of the CF-PSG's. The claim that human languages always have ECPO CF-PSG's is a claim about the strong generative capacity that an appropriate theory of human language should have--one of the first such claims to have been seriously advanced, in fact. It does not affect weak generative capacity; Shieber (1983a) proves that every CFL has an ECPO grammar. It is always possible to construct an ECPO grammar for any CFL if one is willing to pay the price of inventing new nonterminals ad hoc to construct it. The content of the claim lies in the fact that linguists demand independent motivation for the nonterminals they postulate, so that the possibility of creating new ones just to guarantee ECPO-ness is not always a reasonable one. [OPEN PROBLEM: Could there be a non-profligate CFL which had #(N) < #(T) (i.e. nonterminal vocabulary strictly smaller than terminal vocabulary) for at least one of its non-ECPO grammars, but whose ECPO grammars always had #(N) > #(T)?]

When the linguist's criteria of evaluation are kept in mind, it is fairly clear what sort of facts in a human language would convince linguists to abandon the ECPO claim. For example, if English had PP - S" order in verb phrases (explain to him that he'll have to leave) but had S" - PP order in adjectives (so that lucky for us we found you had the form lucky we found you for us), the grammar of English would not have the ECPO property. But such facts appear not to turn up in the languages we know about. The ECPO claim has interesting consequences relating to patterns of constituent order and how these can be described in a fully general way. If a grammar has the ECPO property, it can be stated in what Gazdar and Pullum call ID/LP format, and this renders numerous significant generalizations elegantly capturable. There are also some potentially interesting implications for parsing, studied by Shieber (1983a), who shows that a modified Earley algorithm can be used to parse ID/LP format grammars directly.

One putative challenge to any claim that CF-PSG's can be strongly adequate descriptions for human languages comes from Dutch and has been discussed recently by Bresnan, Kaplan, Peters, and Zaenen (1982). Dutch has constructions like (7):

(7) dat Jan Piet Marie zag leren zwemmen
    that Jan Piet Marie saw teach swim
    "that Jan saw Piet teach Marie to swim"

These seem to involve crossing dependencies over a domain of potentially arbitrary length, a configuration that is syntactically not expressible by a CF-PSG. In the special case where the dependency involves stringwise identity, a language with this sort of structure reduces to something like {xx | x is in some infinite language A}, a string matching language. However, analysis reveals that, as Bresnan et al. accept, the actual dependencies in Dutch are not syntactic. Grammaticality of a string like (7) is not in general affected by interchanging the NP's with one another, since it does not matter to the ith verb what the ith NP might be.
Strictly, this does not bear on the issue of SGC in any way that can be explicated without making reference to semantics.What is really at issue is whether a CF-PSG can assign syntactic qtructures to sentences of Dutch in a way that supports semantic interpretation.Certain recent work within the framework of generalized phrase structure gran~mar suggests to me that there is a very strong probability of the answer being yes. One interesting development is to be found in Culy (forthcoming), where it is shown that it is possible for a CFL-inducing syntax in ID/LP format to assign a "flat" constituent structure to strings like Pier Marie za~ leren zwemmen ('saw Pier teach Marie to swim'), and assign them the correct semantics.Ivan Sag, in unpublished work, has developed a different account, in which strings like za~ leren zwemmen ('saw teach to swim') are treated as compound verbs whose semantics is only satisfied if they are provided with the appropriate number of NP sisters. Whereas Culy has the syntax determine the relative numbers of NP's and verbs, Sag is exploring the assumption that this is unnecessary, since the semantic interpretation procedure can carry this descriptive burden. Under this view too, there is nothing about the syntax of Dutch that makes it non-CF, and there is not necessarily anything in the grammar that makes it non-ECPO.Henry Thompson "also discusses the Dutch problem from the GPSG standpoint (in this volume).One other interesting line of work being pursued (at Stanford, like the work of Culy and of Sag) is due to Carl Pollard (Pollard, forthcoming, provides an introduction).Pollard has developed a generalization of context-free grammar which is defined not on trees but on "headed strings", i.e. strings with a mark indicating that one distinguished element of the string is the "head", and which combines constituents not only by concatenation but also by "head wrap". This operation is analogous to Emmon Bach's notion "right (or left) wrap" but not equivalent to it. It involves wrapping a constituent ~ around a constituent B so that the head is to the left (or right) of B and the rest of ~ is to the right (or left) of ~. Pollard has shown that this provides for an elegant syntactic treatment of the Dutch facts.I mention his work because I want to return to make a point about it in the immediately following section.The time complexity of the recognition problem (TCR) for human languages is like WGC questions in being decried as irrelevant by some linguists, but again, it is hardly one that serious computational approaches can legitimately ignore. Gazdar (1981) has recently reminded the linguistic community of this, and has been answered at great length by Berwick and Weinberg (1982) . Gazdar noted that if transformational grammars (TG's) were stripped of all their transformations, they became CFLinducing, which meant that the series of works showing CFL's to have sub-cubic recognition times became relevant to them. gerwick and Weinberg's paper represents a concerted eff6rt to discredit any such suggestion by insisting that (a) it isn't only the CFL's that have low polynomial recognition time results, and (b) it isn't clear that any asymptotic recognition time results have practical implications for human language use (or for computer modelling of it).Both points should be quite uncontroversial, of course, and it is only by dint of inaccurate attribution that Berwick and Weinberg manage to suggest that Gazdar denies them. 
However, the two points simply do not add up to a reason for not being concerned with TCR results.Perfectly straightforward considerations of theoretical restrictiveness dictate that if the languages recognizable in polynomial time are a proper subset of those recognizable in exponential time (or whatever), it is desirable to explore the hypothesis that the human languages fall within the former class rather than just the latter.Certainly, it is not just CFL's that have been shown to be efficiently recognizable in deterministic time on a Turing machine.Not only every context-free grammar but also every contextsensitive grammar that can actually be exhibited generates a language that can be recognized in deterministic linear time on a two-tape Turing machine.It is certainly not the case that all the context-sensitive languages are linearly recognizable; it can be shown (in a highly indirect way) that there must be some that are not. But all the examples ever constructed generate linearly recognizable languages. And it is still unknown whether there are CFL's not linearly recognizable.It is therefore not at all necessary that a human language should be a CFL in order to be efficiently recognizable.But the claims about recognizability of CFL's do not stop at saying that by good fortune there happens to be a fast recognition algorithm for each member of the class of CFL's. The claim, rather, is that there is ~ single, universal algorithm that works for every member of the class and has a low deterministic polynomial time complexity. That is what cannot be said of the context-sensitive languages.Nonetheless, there are well-understood classes of gr~-m-rs and automata for which it can be said. For example, Pollard, in the course of the work mentioned above, has shown that if one or other of left head wrap and right head wrap is permitted in the theory of generalized context-free grammar, recognizability in deterministic time ~5 is guaranteed, and if both left head wrap and right head wrap are allowed in gr---.-rs (with individual gr-----rs free to have either or both), then in the general case the upper bound for recognition time is ~7o These are, while not sub-cubic, still low deterministic polynomial time bounds. Pollard's system contrasts in this regard with the lexicalfunctional gra~ar advocated by Bresnan etal., which is currently conjectured to have an NPcomplete recognition problem.I remain cautious about welcoming the move that Pollard makes because as yet his non-CFL-inducing syntactic theory does not provide an explanation for the fact that human languages always seem to turn out to be CFL's. It should be pointed out, however, that it is true of every grammatical theory that not every grammar defined as possible is held to be likely to turn up in practice, so it is not inconceivable that the gr-----rs of human languages might fall within the CFL-inducing proper subset of Pollard-style head gra=mars.Of course, another possibility is that it might turn out that some human language ultimately provides evidence of non-CY-ness, and thus of a need for mechanisms at least as powerful as Pollard's. Bresman etal. 
mention at the end of their paper on Dutch a set of potential candidates: the so called "free word order" or "nonconfigurational" languages, particularly Australian languages like Dyirbal and Walbiri, which can allegedly distribute elements of a phrase at random throughout a sentence in almost any order.I have certain doubts about the interpretation of the empirical material on these languages, but I shall not pursue chat here. I want instead to show that, counter to the naive intuition that wild word order would necessarily lead to gross parsing complexity, even rampantly free word order in a language does not necessarily indicate a parsing problem that exhibits itself in TCR terms.Let us call transposition of adjacent terminal symbols scrambling, and let us refer to the closure of a language ~ under scrambling as the scramble of 2-The scramble of a CFL (even a regular one) can he non-CF. For example, the scramble of the regular language (abe)* is non-CF, although (abc)* itself is regular.(Of course, the scramble of a CFL is not always non-CF. The scramble of a*b*c* is (~, b, !)*, and both are regular, hence CF.) Suppose for the sake of discussion that there is a human language that is closed under scrambling (or has an appropriately extractable infinite subset that is). The example just cited, the scramble of (abc)*, is a fairly clear case of the sort of thing that might be modeled in a human language that was closed under scrambling.Imagine, for example, the case of a language in which each transitive clause had a verb (~), a nominative noun phrase (~), and an accusative noun phrase (~), and free word order permitted the ~, b, and ~ from any number of clauses to occur interspersed in any order throughout the sentence.If we denote the number of ~'s in a string Z by Nx(Z), we can say ~nat the scramble of (abc)* is (8). Attention was first drawn to this sort of language by Bach (1981) , and I shall therefore call it a Bach lan~uaze. What TCR properties does a Bach language have? The one in (8), at least, can be shown to be recognizable in linear time. The proof is rather trivial, since it is just a corollary of a previously known result. Cook (1971) shows that any language that is recognized by a two-way deterministic pushdown stack automaton (2DPDA) is recognizable in linear time on a Turing machine.In the Appendix, I give an informal description of a 2DPDA that will recognize the language in (81. Given this, the proof that (8) is linearly recognizable is trivial.• Thus even if my WGC and SGC conjectures were falsified by discoveries about free word order languages (which I consider that they have not been), there would still be no ground for tolerating theories of grammar and parsing that fail to impose a linear time bound on recognition. And recent work of Shieber (1983b) shows that there are interesting avenues in natural language parsing to be explored using deterministic context-free parsers that do work in linear time.In the light of the above remarks, some of the points made by Berwick and Weinberg look rather peculiar. For example, Berwick and Weinberg argue at length that things are really so complicated in practical implementations that a cubic bound on recognition time might not make much difference; for short sentences a theory that only guarantees an exponential time bound might do just as well. This is, to begin with, a very odd response to be made by defenders of TG when confronted by a theoretically restrictive claim. 
If someone made the theoretical claim that some problem had the time complexity of the Travelling Salesman problem, and was met by the response that real-life travelling salesmen do not visit very many cities before returning to head office, I think theoretical computer scientists would have a right to be amused. Likewise, it is funny to see practical implementation considerations brought to bear in defending TG against the phrase structure backlash, when (a) no formalized version of modern TG exists, let alone being available for implementation, and (b) large phrase structure grammars are being implemented on computers and shown to run very fast (see e.g. Slocum 1983, who reports an all-paths, bottom-up parser actually running in linear time using a CF-PSG with 400 rules and 10,000 lexical entries).

Berwick and Weinberg seem to imply that data permitting a comparison of CF-PSG with TG are available. This is quite untrue, as far as I know. I therefore find it nothing short of astonishing to find Chomsky (1981, 234), taking a very similar position, affirming that because the size of the grammar is a constant factor in TCR calculations, and possibly a large one, "the real empirical content of existing results... may well be that grammars are preferred if they are not too complex in their rule structure. If parsability is a factor in language evolution, we would expect it to prefer 'short grammars'---such as transformational grammars based on the projection principle or the binding theory..." TG's based on the "projection principle" and the "binding theory" have yet to be formulated with sufficient explicitness for it to be determined whether they have a rule structure at all, let alone a simple one, and the existence of parsing algorithms for them, of any sort whatever, has not been demonstrated.

The real reason to reject a cubic recognition-time guarantee as a goal to be attained by syntactic theory construction is not that the quest is pointless, but rather that it is not nearly ambitious enough a goal. Anyone who settles for a cubic TCR bound may be settling for a theory a lot laxer than it could be. (This accusation would be levellable equally at TG, lexical-functional grammar, Pollard's generalized context-free grammar, and generalized phrase structure grammar as currently conceived.) Closer to what is called for would be a theory that defines human grammars as some proper subset of the ECPO CF-PSG's that generate infinite, nonprofligate, linear-time recognizable languages. Just as the description of ALGOL-60 in BNF formalism had a galvanizing effect on theoretical computer science (Ginsburg 1980, 6-7), precise specification of a theory of this sort might sharpen quite considerably our view of the computational issues involved in natural language processing. And it would simultaneously be of considerable linguistic interest, at least for those who accept that we need a sharper theory of natural language than the vaguely-outlined decorative notations for Turing machines that are so often taken for theories in linguistics.
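The ECPO property invoked above can be illustrated with a small sketch (mine, not from the paper): extract the left-to-right precedence pairs imposed by every right hand side and check that they are jointly consistent with a single partial order. The category labels are hypothetical and echo the PP/S" example given earlier; "Sbar" stands in for S".

from itertools import combinations

def precedence_pairs(rules):
    # Every ordered pair (X, Y) such that X precedes Y on some right hand side.
    pairs = set()
    for _, rhs in rules:
        for i, j in combinations(range(len(rhs)), 2):
            if rhs[i] != rhs[j]:
                pairs.add((rhs[i], rhs[j]))
    return pairs

def is_ecpo(rules):
    # True iff a single partial ordering of categories is respected by every RHS.
    order = set(precedence_pairs(rules))
    changed = True
    while changed:                      # transitive closure
        changed = False
        for (x, y) in list(order):
            for (y2, z) in list(order):
                if y == y2 and (x, z) not in order:
                    order.add((x, z))
                    changed = True
    return all((y, x) not in order for (x, y) in order)

ok_rules = [("VP", ["V", "NP", "PP"]),
            ("VP", ["V", "PP", "Sbar"]),
            ("AP", ["A", "PP", "Sbar"])]
bad_rules = ok_rules + [("AP", ["A", "Sbar", "PP"])]   # Sbar-PP order reversed in APs

print(is_ecpo(ok_rules))    # True  -- one order (V < NP < PP < Sbar, A < PP) fits all rules
print(is_ecpo(bad_rules))   # False -- no single partial order fits all four rules

An ECPO grammar in this sense is exactly one that can be split into unordered ID rules plus a single shared set of LP statements.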
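And the linear recognition bound claimed above for the Bach language in (8) can be seen directly, without the 2DPDA of the appendix, with a one-pass counting sketch (again merely illustrative, not the construction in the paper):

def in_scramble_of_abc_star(s: str) -> bool:
    # The scramble of (abc)* is the set of strings over {a, b, c} containing
    # equally many a's, b's and c's; a single counting pass decides membership,
    # so recognition is linear in the length of the input.
    counts = {"a": 0, "b": 0, "c": 0}
    for ch in s:
        if ch not in counts:
            return False
        counts[ch] += 1
    return counts["a"] == counts["b"] == counts["c"]

assert in_scramble_of_abc_star("")          # zero of each
assert in_scramble_of_abc_star("cabacb")    # two of each, freely ordered
assert not in_scramble_of_abc_star("aabc")  # counts differ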
null
null
null
null
Main paper: introduction: Many computationally inclined linguists appear to think that in order to achieve adequate gr~----rs for human languages we need a hit more power than is offered by context-free phrase structure grammars (CF-PSG's), though not a whole lot more.In this paper, I am concerned with the defense of a more conservative view: that even CF-PSG's should be regarded as too powerful, in three computationally relevant respects: weak generative capacity, strong generative capacity, and time complexity of recognition.All three of these matters should be of concern to theoretical linguists; the study of what mathematically definable classes human languages fall into does not exhaust scientific linguistics, hut it can hardly he claimed to he irrelevant to it. And it should be obvious that all three issues also have some payoff in terms of certain computationally interesting, if rather indirect, implications.Weak generative capacity (WGC) results are held by some linguists (e.g. Chomsky (1981) ) to be unimportant. Nonetheless, they cannot be ignored by linguists who are interested in setting their work in a context of (even potential) computational implementation (which, of course, some linguists are not). To paraphrase Montague, we might say that linguistically (as opposed to psycholinguistically) there is no important theoretical difference between natural languages and high-level programming languages. Mediating programs (e.g. a compiler or interpreter), of considerable complexity, will be needed for the interpretation of computer input in either Prolog or Japanese.In the latter case the level of complexity will be much higher, but the assumption is that we are talking quantita-tively, not qualitatively.And if we are seriously interested in the computational properties of either kind of language, we will be interested in their language-theoretic properties, as well as properties of the grammars that define them and the parsers that accept them.The most important language-theoretic class considered by designers of programming languages, compilers, etc.is the context-free languages (CFL's). Ginsburg (1980, 7) goes so far as to say on behalf of formal language theorists, "We live or die on the context-free languages.") The class of CFL's is very rich. Although there are simply definable languages well known to be non-CF, linguists often take CFL's to be non-CF in error. Several examples are cited in Pullum and Gazdar (1982) . For another example, see Dowry, Wall and Peters (1980; p.81) , where exercise 3 invites the reader to prove a certain artificial language non-CF. The exercise is impossible, for the language i__% a CFL, as noted by William H. 
Baxter (personal communication to Gerald Gazdar) .From this point on, it will he useful to be able to refer to certain types of formal language by names.I shall use the terms defined in [i) thru (3), among others.languages that can be mapped by a homomorphism onto some language of the form ~ b n ~1 nZl~(2) String Matching Languages: languages that can be mapped by a homomorphism onto some language of the form {xxlx is in some infinite language A}(3) String Contrasti~ Languages: languages that can be mapped by a homomorphism onto some language of the form {xcy[x and y are in some infinite language A and x ~ y} Programming languages are virtually always designed to be CF, except that there is a moot point concerning the implications of obligatory initial declaration of variables as in ALGOL or Pascal, since if variables (identifiers) can be alphanumeric strings of arbitrary length, a syntactic guarantee that each variable has been declared is tantamount to a syntax for a string matching language. The following view seems a sensible one to take about such cases: languages like ALGOL or Pascal are CF, but not all ALGOL or Pascal programs compile or run. Programs using undeclared variables make no sense either to the compiler or to the CPU. But they are still programs, provided they conform in all other ways to the syntax of the language in question, just as a program which always goes into an infinite loop and thus never gives any output is a program. Aho and Ullmann (1977, 140) take such a view: the syntax of ALGOL...does not get down to the level of characters in a name. Instead, all names are represented by a token such as i d, and it is left to the bookkeeping phase of the compiler to keep track of declarations and uses of particular names.The bookkeeping has Co be done, of course, even in the case of languages like LISP whose syntax does not demand a list of declarations at the start of each program.Various efforts have been made in the linguistic literature to show that some human language has an infinite, appropriately extractable subset that is a triple counting language or a string matching language.(By appropriately extractable I mean isolable via either homomorphism or intersection with a regular set.) But all the published claims of this sort are fallacious (Pullum and Gazdar 1982) . This lends plausibility to the hypothesis that human languages are all CF. Stronger claims than this (e.g. that human languages are regular, or finite cardinality) have seldom seriously defended. I now want to propose one, however.I propose that human languages are never profligate CYL's in the sense given by the following definition. Clearly, only an infinite CPL can be profligate, and clearly the most commonly cited infinite CFL's are not profligate.For instance, {!nbn~n ~ 0} is not profligate, because it has two terminal symbols but there is a grammar for it that has only one nonterminal symbol, namely S. (The rules are:(S --> aSb, S --> e}.) However, profligate CFL's do exist. There are even regular languages that are profligate: a simple example (due to Christopher Culy) is (A* + ~*).More interesting is the fact that some string contrasting languages as defined above are profligate. Consider the string contrasting language over the vocabulary {~, k, K} where A = (A + ~)*. 
A string xcv in (~ + b)*~(~ + A)* will be in this language if any one of the following is met: (a) ~ is longer than Z; (b) K is shorter than ~; (c) ~ is the same length as ~ but there is an such that the ith symbol of K is distinct from the ith symbol of ~.The interesting Condition here is (c). The grammar has to generate, for all ~ and for all pairs <u, v> of symbols in the terminal vocabulary, all those strings in (a + b)*c(a + b)* such that the ~th symbol is ~ and the ~th symbol after ~ is Z. There is no bound on l, so recursion has tO be involved. But it must be recursion through a category that preserves a record of which symbol is crucially going to be deposited at the ~th position in the terminal string and mismatched with a distinct symbol in the second half. A CF-PSG that does this can be constructed (see Pullum and Gazdar 1982, 478 , for a grammar for a very similar language). But such a grammar has to use recursive nonterminals, one for each terminal, to carry down information about the symbol to be deposited at a certain point in the string.In the language just given there are only two relevant terminal symbols, but if there were a thousand symbols that could appear in the ~ and ~ strings, then the vocabulary of recursive nonterminals would have to be increased in proportion.(The second clause in the definition of profligacy makes it irrelevant whether there are other terminals in the language, like g in the language cited, that do not have to participate in the recursive mechanisms just referred to.)For a profligate CFL, the argument that a CF-PSG is a cumbersome and inelegant form of grammar might well have to be accepted. A CF-PSG offers, in some cases at least, an appallingly inelegant hypothesis as to the proper description of such a language, and would be rejected by any linguist or programmer. The discovery that some human language is profligate would therefore provide (for the first time, I claim) real grounds for a rejection of CF-PSG's on the basis of strong generative capacity (considerations of what structural descriptions are assigned to strings) as opposed to weak (what language is generated).However, no human language has been shown to be a profligate CFL. There is one relevant argument in the literature, found in Chomsky (1963) . The argument is based on the nonidentity of constituents allegedly required in comparative clause constructions like (4). Chomsky took sentences like (5) to be ungrammatical, and thus assumed that the nonidentity between the bracketed phrases in the previous example had to be guaranteed by the grammar. Chomsky took this as an argument for non-CF-ness in English, since he thought all string contrasting languages were non-CF (see Chomsky 1963, 378-379) , but it can be reinterpreted as an attempt to show that English is (at least) profligate.(It could even be reconstituted as a formally valid argument that English was non-CF if supplemented by a demonstration that the class of phrases from which the bracketed sequences are drawn is not only" infinite but non-regular; of. Zwicky and Sadock.) However, the argument clearly collapses on empirical grounds. As pointed out by Pullum and Gazdar (1982, 476-477) , even Chomsky now agrees that strings like (5) are grammatical (though they need a contrastive context and the appropriate intonation to make them readily acceptable to informants). 
Hence these examples do not show that there is a homomorphism mapping English onto some profligate string contrasting language.The interesting thing about this, if it is correct, is that it suggests that human languages not only never demand the syntactic string comparison required by string matching languages, they never call for syntactic string comparision over infinite sets of strings at all, whether for symbol-by-symbol checking of identity (which typically makes the language non-CF) or for specifying a mismatch between symbols (which may not make the language non-CF, but typically makes it profligate).There is an important point about profligacy that" I should make at this point. My claim that human languages are non-profligate entails that each human language has at least one CF-PSG in which the nonterminal vocabulary has cardinality strictly less than the terminal vocabulary, but not that the best granzaar to implement for it will necessarily meet this condition.The point is important, because the phrase structure grammars employed in natural language processing generally have complex nouterminals consisting of sizeable feature bundles.It is not uncommon for a large natural language processing system to employ thirty . or forty binary features (or a rough equivalent in terms of multi-valued features), i.e. about as many features as are employed for phonological description by Chomsky and Halle (19681. The GPSG system described in Gawron et al. (1982) has employed features on this sort of scale at all points in its development, for example. Thirty or forty binary features yields between a billion and a trillion logically distinguishable nonterminals (if all values for each feature are compatible with all combinations of values for all other features). Because economical techniques for rapid checking of relevant feature values are built into the parsers normally used for such grammars, the size of the potentially available nonterminal vocabulary is not a practical concern.In principle, if the goal of capturing generalizations and reducing the size of the grammar formulation were put aside, the nonterminal vocabulary could be vastly reduced by replacing rule schemata by long lists of distinct rules expanding the same nonterminal.Naturally, no claim has been made here that profligate CFL's are computationally intractable. No CFL's are intractable in the theoretical sense, and intractability in practice is so closely tied to details of particular machines and programming environments as to be pointless to talk about in terms divorced from actual measurements of size for grammars, vocabularies, and address spaces.I have been concerned only to point out that there is an interesting proper subset of the infinite CFL's within which the human languages seem to fall. One further thing may be worth pointing out. The kind of string contrasting languages I have been concerned with above are strictly nondeterministic. The deterministic CFL's (DCFL's) are closed under complementation.But the cor~ I _nt of b. {xcx[x is in (a + b)*} If (7a) [=(Yb)]is non-CF and is the complement of (6), then (6) is not a DCFL.[OPEN PROBLEM: Are there any nonregular profligate DCFL's?] 
strong generative capacity: I now turn to a claim involving strong generative capacity (SGC).In addition to claiming that human languages are non-profligate CFL's, I want to suggest that every human language has a linguistically adequate grammar possessing the Exhaustive Constant Partial Ordering (ECPO) property of Gazdar and Pullum (1981) . A grammar has this property if there is a single partial ordering of the nontermihal vocabulary which no right hand side of any rule violates.The ECPO CF-PSG's are a nonempty proper subset of the CF-PSG's.The claim that human languages always have ECPO CF-PSG's is a claim about the strong generative capacity that an appropriate theory of human language should have--one of the first such claims to have been seriously advanced, in fact.It does not affect weak generative capacity; Shieber (1983a) proves that every CFL has an ECPO grammar.It is always possible to construct an ECPO grammar for any CFL if one is willing to pay the price of inventing new nonterminals ad hoc to construct it. The content of the claim lies in the fact that linguists demand independent motivation for the nonterminals they postulate, so that the possibility of creating new ones just to guarantee ECPO-ness is not always a reasonable one.Could there be a non-profligate CFL which had #(N) < #T (i.e. nonterminal vocabulary strictly smaller than terminal vocabulary) for at least one of its non-ECPO grammars, but whose ECPO grammars always had #(N) > #(T)?] When the linguist's criteria of evaluation are kept in mind, it is fairly clear what sort of facts in a human language would convince linguists to abandon the ECPO claim. For example, if English had PP -S" order in verb phrases (explain to him ~a~ he'll have to leave) but had S" -PP order in adjectives (so that lucky for us we found you had the form lucky we found you for us), the grammar of English would not have the ECPO property.But such facts appear not to turn up in the languages we know about.The ECPO claim has interesting consequences relating to patterns of constituent order and how these can be described in a fully general way. If a gr~r has the ECPO property, it can be stated in what Gazdar and Pullum call ID/LP format, and this renders numerous significant generalizations elegantly capturable.There are also some potentially interesting implications for parsing, studied by Shieber (1983a) , who shows that a modified Earley algorithm can be used to parse ID/LP format gr----mrs directly°One putative challenge to any claim that CF-PSG's can be strongly adequate descriptions for human languages comes from Dutch and has been discussed recently by Bresnan, Kaplan, Peters, and Zaenen (1982) . Dutch has constructions like (7) dat Jan Pier Marie zag leren zwemmen that Jan Pier Marie saw teach swim "that Jan saw Pier teach Marie to swim"These seem to involve crossing dependencies over a domain of potentially arbitrary length, a configuration that is syntactically not expressible by a CF-PSG.In the special case where the dependency involves stringwise ~dentity, a language with this sort of structure reduces to something like {xx[~ is in ~*}, a string matching language. However, analysis reveals that, as Bresnan et el. accept, the actual dependencies in Dutch are not syntactic. Grammaticality of a string like (7) is not in general affected by interchanging the NP's with one another, since it does not matter to the ~th verb what the ith NP might he. 
What is crucial is that (in cases with simple transitive verbs, as above) the ~th predicate (verb) takes the interpretation of the i-lth noun phrase as its argument. Strictly, this does not bear on the issue of SGC in any way that can be explicated without making reference to semantics.What is really at issue is whether a CF-PSG can assign syntactic qtructures to sentences of Dutch in a way that supports semantic interpretation.Certain recent work within the framework of generalized phrase structure gran~mar suggests to me that there is a very strong probability of the answer being yes. One interesting development is to be found in Culy (forthcoming), where it is shown that it is possible for a CFL-inducing syntax in ID/LP format to assign a "flat" constituent structure to strings like Pier Marie za~ leren zwemmen ('saw Pier teach Marie to swim'), and assign them the correct semantics.Ivan Sag, in unpublished work, has developed a different account, in which strings like za~ leren zwemmen ('saw teach to swim') are treated as compound verbs whose semantics is only satisfied if they are provided with the appropriate number of NP sisters. Whereas Culy has the syntax determine the relative numbers of NP's and verbs, Sag is exploring the assumption that this is unnecessary, since the semantic interpretation procedure can carry this descriptive burden. Under this view too, there is nothing about the syntax of Dutch that makes it non-CF, and there is not necessarily anything in the grammar that makes it non-ECPO.Henry Thompson "also discusses the Dutch problem from the GPSG standpoint (in this volume).One other interesting line of work being pursued (at Stanford, like the work of Culy and of Sag) is due to Carl Pollard (Pollard, forthcoming, provides an introduction).Pollard has developed a generalization of context-free grammar which is defined not on trees but on "headed strings", i.e. strings with a mark indicating that one distinguished element of the string is the "head", and which combines constituents not only by concatenation but also by "head wrap". This operation is analogous to Emmon Bach's notion "right (or left) wrap" but not equivalent to it. It involves wrapping a constituent ~ around a constituent B so that the head is to the left (or right) of B and the rest of ~ is to the right (or left) of ~. Pollard has shown that this provides for an elegant syntactic treatment of the Dutch facts.I mention his work because I want to return to make a point about it in the immediately following section. time complexity of recognition: The time complexity of the recognition problem (TCR) for human languages is like WGC questions in being decried as irrelevant by some linguists, but again, it is hardly one that serious computational approaches can legitimately ignore. Gazdar (1981) has recently reminded the linguistic community of this, and has been answered at great length by Berwick and Weinberg (1982) . Gazdar noted that if transformational grammars (TG's) were stripped of all their transformations, they became CFLinducing, which meant that the series of works showing CFL's to have sub-cubic recognition times became relevant to them. 
Berwick and Weinberg's paper represents a concerted effort to discredit any such suggestion by insisting that (a) it isn't only the CFL's that have low polynomial recognition time results, and (b) it isn't clear that any asymptotic recognition time results have practical implications for human language use (or for computer modelling of it). Both points should be quite uncontroversial, of course, and it is only by dint of inaccurate attribution that Berwick and Weinberg manage to suggest that Gazdar denies them. However, the two points simply do not add up to a reason for not being concerned with TCR results. Perfectly straightforward considerations of theoretical restrictiveness dictate that if the languages recognizable in polynomial time are a proper subset of those recognizable in exponential time (or whatever), it is desirable to explore the hypothesis that the human languages fall within the former class rather than just the latter. Certainly, it is not just CFL's that have been shown to be efficiently recognizable in deterministic time on a Turing machine. Not only every context-free grammar but also every context-sensitive grammar that can actually be exhibited generates a language that can be recognized in deterministic linear time on a two-tape Turing machine. It is certainly not the case that all the context-sensitive languages are linearly recognizable; it can be shown (in a highly indirect way) that there must be some that are not. But all the examples ever constructed generate linearly recognizable languages. And it is still unknown whether there are CFL's not linearly recognizable. It is therefore not at all necessary that a human language should be a CFL in order to be efficiently recognizable. But the claims about recognizability of CFL's do not stop at saying that by good fortune there happens to be a fast recognition algorithm for each member of the class of CFL's. The claim, rather, is that there is a single, universal algorithm that works for every member of the class and has a low deterministic polynomial time complexity. That is what cannot be said of the context-sensitive languages. Nonetheless, there are well-understood classes of grammars and automata for which it can be said. For example, Pollard, in the course of the work mentioned above, has shown that if one or other of left head wrap and right head wrap is permitted in the theory of generalized context-free grammar, recognizability in deterministic time n^5 is guaranteed, and if both left head wrap and right head wrap are allowed in grammars (with individual grammars free to have either or both), then in the general case the upper bound for recognition time is n^7. These are, while not sub-cubic, still low deterministic polynomial time bounds. Pollard's system contrasts in this regard with the lexical-functional grammar advocated by Bresnan et al., which is currently conjectured to have an NP-complete recognition problem. I remain cautious about welcoming the move that Pollard makes because as yet his non-CFL-inducing syntactic theory does not provide an explanation for the fact that human languages always seem to turn out to be CFL's.
It should be pointed out, however, that it is true of every grammatical theory that not every grammar defined as possible is held to be likely to turn up in practice, so it is not inconceivable that the grammars of human languages might fall within the CFL-inducing proper subset of Pollard-style head grammars. Of course, another possibility is that it might turn out that some human language ultimately provides evidence of non-CF-ness, and thus of a need for mechanisms at least as powerful as Pollard's. Bresnan et al. mention at the end of their paper on Dutch a set of potential candidates: the so called "free word order" or "nonconfigurational" languages, particularly Australian languages like Dyirbal and Walbiri, which can allegedly distribute elements of a phrase at random throughout a sentence in almost any order. I have certain doubts about the interpretation of the empirical material on these languages, but I shall not pursue that here. I want instead to show that, counter to the naive intuition that wild word order would necessarily lead to gross parsing complexity, even rampantly free word order in a language does not necessarily indicate a parsing problem that exhibits itself in TCR terms. Let us call transposition of adjacent terminal symbols scrambling, and let us refer to the closure of a language L under scrambling as the scramble of L. The scramble of a CFL (even a regular one) can be non-CF. For example, the scramble of the regular language (abc)* is non-CF, although (abc)* itself is regular. (Of course, the scramble of a CFL is not always non-CF. The scramble of a*b*c* is {a, b, c}*, and both are regular, hence CF.) Suppose for the sake of discussion that there is a human language that is closed under scrambling (or has an appropriately extractable infinite subset that is). The example just cited, the scramble of (abc)*, is a fairly clear case of the sort of thing that might be modeled in a human language that was closed under scrambling. Imagine, for example, the case of a language in which each transitive clause had a verb (a), a nominative noun phrase (b), and an accusative noun phrase (c), and free word order permitted the a, b, and c from any number of clauses to occur interspersed in any order throughout the sentence. If we denote the number of x's in a string Z by Nx(Z), we can say that the scramble of (abc)* is (8), the set of strings Z over {a, b, c} such that Na(Z) = Nb(Z) = Nc(Z). Attention was first drawn to this sort of language by Bach (1981), and I shall therefore call it a Bach language. What TCR properties does a Bach language have? The one in (8), at least, can be shown to be recognizable in linear time. The proof is rather trivial, since it is just a corollary of a previously known result. Cook (1971) shows that any language that is recognized by a two-way deterministic pushdown stack automaton (2DPDA) is recognizable in linear time on a Turing machine. In the Appendix, I give an informal description of a 2DPDA that will recognize the language in (8). Given this, the proof that (8) is linearly recognizable is trivial. Thus even if my WGC and SGC conjectures were falsified by discoveries about free word order languages (which I consider that they have not been), there would still be no ground for tolerating theories of grammar and parsing that fail to impose a linear time bound on recognition.
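The linearity claim for the Bach language in (8) can also be made concrete without the detour through Cook's theorem: for this particular language, a single left-to-right scan that merely counts symbol occurrences suffices. The sketch below (in Python, with invented names) is purely illustrative and is not the 2DPDA construction referred to in the Appendix; it exploits the counting character of (8) rather than the general result.

def in_scramble_of_abc(s: str) -> bool:
    """Recognize the scramble of (abc)*: strings over {a, b, c} containing
    equal numbers of a's, b's and c's.  One pass over the input, so the
    check runs in time linear in the length of the string."""
    counts = {'a': 0, 'b': 0, 'c': 0}
    for ch in s:
        if ch not in counts:          # a symbol outside the vocabulary
            return False
        counts[ch] += 1
    return counts['a'] == counts['b'] == counts['c']

# A few checks: the unscrambled case, a scrambled two-clause string,
# a string with unequal counts, and one with a foreign symbol.
assert in_scramble_of_abc("abcabc")
assert in_scramble_of_abc("acbbca")
assert not in_scramble_of_abc("aabc")
assert not in_scramble_of_abc("abd")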
And recent work of Shieber (1983b) shows that there are interesting avenues in natural language parsing to be explored using deterministic context-free parsers that do work in linear time. In the light of the above remarks, some of the points made by Berwick and Weinberg look rather peculiar. For example, Berwick and Weinberg argue at length that things are really so complicated in practical implementations that a cubic bound on recognition time might not make much difference; for short sentences a theory that only guarantees an exponential time bound might do just as well. This is, to begin with, a very odd response to be made by defenders of TG when confronted by a theoretically restrictive claim. If someone made the theoretical claim that some problem had the time complexity of the Travelling Salesman problem, and was met by the response that real-life travelling salesmen do not visit very many cities before returning to head office, I think theoretical computer scientists would have a right to be amused. Likewise, it is funny to see practical implementation considerations brought to bear in defending TG against the phrase structure backlash, when (a) no formalized version of modern TG exists, let alone being available for implementation, and (b) large phrase structure grammars are being implemented on computers and shown to run very fast (see e.g. Slocum 1983, who reports an all-paths, bottom-up parser actually running in linear time using a CF-PSG with 400 rules and 10,000 lexical entries). Berwick and Weinberg seem to imply that data permitting a comparison of CF-PSG with TG are available. This is quite untrue, as far as I know. I therefore find it nothing short of astonishing to find Chomsky (1981, 234), taking a very similar position, affirming that because the size of the grammar is a constant factor in TCR calculations, and possibly a large one, "The real empirical content of existing results... may well be that grammars are preferred if they are not too complex in their rule structure. If parsability is a factor in language evolution, we would expect it to prefer 'short grammars'--such as transformational grammars based on the projection principle or the binding theory..." TG's based on the "projection principle" and the "binding theory" have yet to be formulated with sufficient explicitness for it to be determined whether they have a rule structure at all, let alone a simple one, and the existence of parsing algorithms for them, of any sort whatever, has not been demonstrated. The real reason to reject a cubic recognition-time guarantee as a goal to be attained by syntactic theory construction is not that the quest is pointless, but rather that it is not nearly ambitious enough a goal. Anyone who settles for a cubic TCR bound may be settling for a theory a lot laxer than it could be. (This accusation would be levellable equally at TG, lexical-functional grammar, Pollard's generalized context-free grammar, and generalized phrase structure grammar as currently conceived.) Closer to what is called for would be a theory that defines human grammars as some proper subset of the ECPO CF-PSG's that generate infinite, non-profligate, linear-time recognizable languages. Just as the description of ALGOL-60 in BNF formalism had a galvanizing effect on theoretical computer science (Ginsburg 1980, 6-7), precise specification of a theory of this sort might sharpen quite considerably our view of the computational issues involved in natural language processing.
And it would simultaneously be of considerable linguistic interest, at least for those who accept that we need a sharper theory of natural language than the vaguely-outlined decorative notations for Turing machines that are so often taken for theories in linguistics. Appendix:
null
null
null
null
{ "paperhash": [ "shieber|sentence_disambiguation_by_a_shift-reduce_parsing_technique", "slocum|a_status_report_on_the_lrc_machine", "chomsky|knowledge_of_language:_its_elements_and_origins" ], "title": [ "Sentence Disambiguation by a Shift-Reduce Parsing Technique", "A Status Report on the LRC Machine", "Knowledge of language: its elements and origins" ], "abstract": [ "Native speakers of English show definite and consistent preferences for certain readings of syntactically ambiguous sentences. A user of a natural-language-processing system would naturally expect it to reflect the same preferences. Thus, such systems must model in some way the linguistic performance as well as the linguistic competence of the native speaker. We have developed a parsing algorithm--a variant of the LALR(I) shift-reduce algorithm--that models the preference behavior of native speakers for a range of syntactic preference phenomena reported in the psycholinguistic literature, including the recent data on lexical preferences. The algorithm yields the preferred parse deterministically, without building multiple parse trees and choosing among them. As a side effect, it displays appropriate behavior in processing the much discussed garden-path sentences. The parsing algorithm has been implemented and has confirmed the feasibility of our approach to the modeling of these phenomena.", "This paper discusses the linguistic and computational techniques employed in the current version of machine Translation system being developed at the Linguistics Research Center of the University of Texas, under contract to Siemens AG in Munich, West Germany. We pay particular attention to the reasons for our choice of certain techniques over other candidates, based on both objective and subjective criteria. We then report the system's status vis-a-vis its readiness for application in a production environment, as a means of justifying our claims regarding the practical utility of the methods we espouse.", "My approach to the study of language is based on the assumption that knowledge of language can be properly characterized by means of a generative grammar, i.e. a system of rules and principles that assigns structural descriptions to linguistic expressions. On this view, the basic concepts are those of ‘grammar’ and ‘knowledge of grammar’. The concepts of language’ and ‘knowledge of language’ are derivative: they involve a higher level of abstraction from psychological mechanisms and raise additional (though not necessarily important) problems. Of central concern, from this point of view, will be to determine the biological endowment that makes it possible for a grammar of the required sort to develop in human beings provided that they are exposed to some appropriate body of experience. This biological endowment may be regarded as a function that maps a body of experience into a particular grammar. The function itself is commonly referred to as universal grammar (u.g.) and can be expressed, in part, as a system of principles that determine the class of accessible particular grammars and their properties. Recent work suggests that u.g. consists, on the one hand, of a theory of so-called core grammar and, on the other, of a theory of permissible extensions and modifications of core grammar. Given the intricate internal structure of u.g., it can account for the superficially highly diverse grammars and languages that do in fact exist. 
Thus, what appear to be quite different systems of knowledge may arise from relatively little experience. A number of subsystems of u.g. have now been explored, each with its distinctive properties and possibilities of variation. Some current proposals concerning these systems are sketched, and some consequences considered with regard to the nature and acquisition of cognitive systems (including systems of knowledge) more generally." ], "authors": [ { "name": [ "Stuart M. Shieber" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Jonathan Slocum" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Noam Chomsky" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null ], "s2_corpus_id": [ "215514040", "7066904", "84708174" ], "intents": [ [ "background" ], [], [ "background" ] ], "isInfluential": [ false, false, false ] }
Problem: Investigating the expressive power of context-free grammars in describing human languages. Solution: The hypothesis proposes that context-free grammars may be overly powerful in describing human languages, considering weak generative capacity, strong generative capacity, and time complexity of recognition as computationally relevant dimensions.
500
0.046
null
null
null
null
null
null
null
null
b78b881dec3334abf5b8c6390c40a2d93a177294
215514040
null
Sentence Disambiguation by a Shift-Reduce Parsing Technique
Native speakers of English show definite and consistent preferences for certain readings of syntactically ambiguous sentences. A user of a natural-language-processing system would naturally expect it to reflect the same preferences. Thus, such systems must model in some way the linguistic performance as well as the linguistic competence of the native speaker. We have developed a parsing algorithm--a variant of the LALR(1) shift-reduce algorithm--that models the preference behavior of native speakers for a range of syntactic preference phenomena reported in the psycholinguistic literature, including the recent data on lexical preferences. The algorithm yields the preferred parse deterministically, without building multiple parse trees and choosing among them. As a side effect, it displays appropriate behavior in processing the much discussed garden-path sentences. The parsing algorithm has been implemented and has confirmed the feasibility of our approach to the modeling of these phenomena.
{ "name": [ "Shieber, Stuart M." ], "affiliation": [ null ] }
null
null
21st Annual Meeting of the Association for Computational Linguistics
1983-06-01
12
90
null
For natural language processing systems to be useful, they must assign the same interpretation to a given sentence that a native speaker would, since that is precisely the behavior users will expect. Consider, for example, the case of ambiguous sentences. Native speakers of English show definite and consistent preferences for certain readings of syntactically ambiguous sentences [Kimball, 1973, Frazier and Fodor, 1978, Ford et al., 1982]. A user of a natural-language-processing system would naturally expect it to reflect the same preferences. Thus, such systems must model in some way the linguistic performance as well as the linguistic competence of the native speaker. This idea is certainly not new in the artificial-intelligence literature. The pioneering work of Marcus [Marcus, 1980] is perhaps the best known example of linguistic-performance modeling in AI. Starting from the hypothesis that "deterministic" parsing of English is possible, he demonstrated that certain performance constraints, e.g., the difficulty of parsing garden-path sentences, could be modeled. (This research was supported by the Defense Advanced Research Projects Agency under Contract N00039-80-C-0575 with the Naval Electronic Systems Command. The views and conclusions contained in this document are those of the author and should not be interpreted as representative of the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the United States government.) His claim about deterministic parsing was quite strong. Not only was the behavior of the parser required to be deterministic, but, as Marcus claimed, "The interpreter cannot use some general rule to take a nondeterministic grammar specification and impose arbitrary constraints to convert it to a deterministic specification (unless, of course, there is a general rule which will always lead to the correct decision in such a case)." [Marcus, 1980, p.14] We have developed and implemented a parsing system that, given a nondeterministic grammar, forces disambiguation in just the manner Marcus rejected (i.e. through general rules); it thereby exhibits the same preference behavior that psycholinguists have attributed to native speakers of English for a certain range of ambiguities. These include structural ambiguities [Frazier and Fodor, 1978, Frazier and Fodor, 1980] and lexical preferences [Ford et al., 1982], as well as the garden-path sentences as a side effect. The parsing system is based on the shift-reduce scheduling technique of Pereira [forthcoming]. Our parsing algorithm is a slight variant of LALR(1) parsing, and, as such, exhibits the three conditions postulated by Marcus for a deterministic mechanism: it is data-driven, reflects expectations, and has look-ahead. Like Marcus's parser, our parsing system is deterministic. Unlike Marcus's parser, the grammars used by our parser can be ambiguous.
The parsing system was designed to manifest preferences among structurally distinct parses of ambiguous sentences. It does this by building just one parse tree--rather than building multiple parse trees and choosing among them. Like the Marcus parsing system, ours does not do disambiguation requiring "extensive semantic processing," but, in contrast to Marcus, it does handle such phenomena as PP-attachment insofar as there exist a priori preferences for one attachment over another. By a priori we mean preferences that are exhibited in contexts where pragmatic or plausibility considerations do not tend to favor one reading over the other. Rather than make such value judgments ourselves, we defer to the psycholinguistic literature (specifically [Frazier and Fodor, 1978], [Frazier and Fodor, 1980] and [Ford et al., 1982]) for our examples. Among these are attachment ambiguities in which "for Susan" modifies "the book" rather than "bought." Frazier and Fodor [1978] note that these are cases in which the higher attachment includes fewer nodes in the parse tree. Our analysis is somewhat different. Ford et al. [1982] present evidence that attachment preferences depend on lexical choice. Thus, the preferred reading for The woman wanted the dress on that rack. has low attachment of the PP, whereas The woman positioned the dress on that rack. has high attachment. Finally, garden-path sentences like The horse raced past the barn fell. seem actually to receive no parse by the native speaker until some sort of "conscious parsing" is done. Following Marcus [Marcus, 1980], we take this to be a hard failure of the human sentence-processing mechanism. It will be seen that all these phenomena are handled in our parser by the same general rules. The simple context-free grammar used (see Appendix I) allows both parses of the ambiguous sentences as well as one for the garden-path sentences. The parser disambiguates the grammar and yields only the preferred structure. The actual output of the parsing system can be found in Appendix II.
The parsing system we use is a shift-reduce parser. Shift-reduce parsers [Aho and Johnson, 1974] use a stack of constituents built up so far in the parse and a shift-reduce table for guiding the parse. At each step in the parse, the table is used for deciding between two basic types of operations: the shift operation, which adds the next word in the sentence (with its preterminal category) to the top of the stack, and the reduce operation, which removes several elements from the top of the stack and replaces them with a new element--for instance, removing an NP and a VP from the top of the stack and replacing them with an S. The state of the parser is also updated in accordance with the shift-reduce table at each stage. The combination of the stack, input, and state of the parser will be called a configuration and will be notated as, for example, [ NP V || Mary | 10 ], where the stack contains the nonterminals NP and V, the input contains the lexical item Mary and the parser is in state 10. By way of example, we demonstrate the operation of the parser (using the grammar of Appendix I) on the oft-cited sentence "John loves Mary." Initially the stack is empty and no input has been consumed. The parser begins in state 0. As elements are shifted to the stack, they are replaced by their preterminal category. The shift-reduce table for the grammar of Appendix I states that in state 0, with a proper noun as the next word in the input, the appropriate action is a shift. After a series of shifts and reductions, the input has been consumed and an S derived. Thus the sentence is grammatical in the grammar of Appendix I, as expected. The shift-reduce table mentioned above is generated automatically from a context-free grammar by the standard algorithm [Aho and Johnson, 1974]. The parsing algorithm differs, however, from the standard LALR(1) parsing algorithm in two ways. First, instead of assigning preterminal symbols to words as they are shifted, the algorithm allows the assignment to be delayed if the word is ambiguous among preterminals. When the word is used in a reduction, the appropriate preterminal is assigned. Second, and most importantly, since true LR parsers exist only for unambiguous grammars, the normal algorithm for deriving LALR(1) shift-reduce tables yields a table that may specify conflicting actions under certain configurations. It is through the choice made from the options in a conflict that the preference behavior we desire is engendered. One key advantage of shift-reduce parsing that is critical in our system is the fact that decisions about the structure to be assigned to a phrase are postponed as long as possible. In keeping with this general principle, we extend the algorithm to allow the assignment of a preterminal category to a lexical item to be deferred until a decision is forced upon it, so to speak, by an encompassing reduction. For instance, we would not want to decide on the preterminal category of the word "that," which can serve as either a determiner (DET) or complementizer (THAT), until some further information is available. Consider the sentences That problem is important. and That problems are difficult to solve is important. Instead of assigning a preterminal to "that," we leave open the possibility of assigning either DET or THAT until the first reduction that involves the word. In the first case, this reduction will be by the rule NP → DET NOM, thus forcing, once and for all, the assignment of DET as preterminal.
In the second case, the DET NOM analysis is disallowed on the basis of number agreement, so that the first applicable reduction is the COMPS reduction to S, forcing the assignment of THAT as preterminal. Of course, the question arises as to what state the parser goes into after shifting the lexical item "that." The answer is quite straightforward, though its interpretation vis-a-vis the determinism hypothesis is subtle. The simple answer is that the parser enters into a state corresponding to the union of the states entered upon shifting a DET and upon shifting a THAT respectively, in much the same way as the deterministic simulation of a nondeterministic finite automaton enters a "union" state when faced with a nondeterministic choice. Are we then merely simulating a nondeterministic machine here? The answer is equivocal. Although the implementation acts as a simulator for a nondeterministic machine, the nondeterminism is a priori bounded, given a particular grammar and lexicon. (Footnote 3: The boundedness comes about because only a finite amount of information is kept per state (an integer) and the nondeterminism stops at the preterminal level, so that the splitting of states does not propagate.) Thus, the nondeterminism could be traded in for a larger, albeit still finite, set of states, unlike the nondeterminism found in other parsing algorithms. Another way of looking at the situation is to note that there is no observable property of the algorithm that would distinguish the operation of the parser from a deterministic one. In some sense, there is no interesting difference between the limited nondeterminism of this parser, and Marcus's notion of strict determinism. In fact, the implementation of Marcus's parser also embodies a bounded nondeterminism in much the same way this parser does. The differentiating property between this parser and that of Marcus is a slightly different one, namely, the property of quasi-real-time operation. (Footnote 4: I am indebted to Mitch Marcus for this observation and the previous comparison with his parser.) By quasi-real-time operation, Marcus means that there exists a maximum interval of parser operation for which no output can be generated. If the parser operates for longer than this, it must generate some output. For instance, the parser might be guaranteed to produce output (i.e., structure) at least every three words. However, because preterminal assignment can be delayed indefinitely in pathological grammars, there may exist sentences in such grammars for which arbitrary numbers of words need to be read before output can be produced. It is not clear whether this is a real disadvantage or not, and, if so, whether there are simple adjustments to the algorithm that would result in quasi-real-time behavior. In fact, it is a property of bottom-up parsing in general that quasi-real-time behavior is not guaranteed. Our parser has a less restrictive but similar property, fairness; that is, our parser generates output linear in the input, though there is no constant over which output is guaranteed. For a fuller discussion of these properties, see Pereira and Shieber [forthcoming]. To summarize, preterminal delaying, as an intrinsic part of the algorithm, does not actually change the basic properties of the algorithm in any observable way. Note, however, that preterminal assignments, like reductions, are irrevocable once they are made (as a byproduct of the determinism of the algorithm). Such decisions can therefore lead to garden paths, as they do for the sentences presented in Section 3.6. We now discuss the central feature of the algorithm,
namely, the resolution of shift-reduce conflicts. Conflicts arise in two ways: shift-reduce conflicts, in which the parser has the option of either shifting a word onto the stack or reducing a set of elements on the stack to a new element; and reduce-reduce conflicts, in which reductions by several grammar rules are possible. The parser uses two rules to resolve these conflicts (Footnote 5: The original notion of using a shift-reduce parser and general scheduling principles to handle right association and minimal attachment, together with the following two rules, are due to Fernando Pereira [Pereira, 1982]. The formalization of preterminal delaying and the extensions to the lexical-preference cases and garden-path behavior are due to the author.): (1) Resolve shift-reduce conflicts by shifting. (2) Resolve reduce-reduce conflicts by performing the longer reduction. These two rules suffice to engender the appropriate behavior in the parser for cases of right association and minimal attachment. Though we demonstrate our system primarily with PP-attachment examples, we claim that the rules are generally valid for the phenomena being modeled [Pereira and Shieber, forthcoming]. Some examples demonstrate these principles. Consider the sentence Joe took the book that I bought for Susan. After a certain amount of parsing has been completed deterministically, the parser will be in a configuration of the form [ NP V that V || for S... ], with a shift-reduce conflict, since the V can be reduced to a VP/NP (Footnote 6: The "slash-category" analysis of long-distance dependencies used here is loosely based on the work of Gazdar [1981]. The Appendix I grammar does not incorporate the full range of slashed rules, however, but merely a representative selection for illustrative purposes.) or the P can be shifted. The principles presented would resolve the conflict in favor of the shift, thereby leading to a derivation in which the stack passes through the configuration [ NP V NP that NP V P || Susan ]. The sentence Joe bought the book for Susan. demonstrates resolution of a reduce-reduce conflict. At some point in the parse, the parser is in a configuration whose stack begins [ NP V ... ]. To handle the lexical-preference examples, we extend the second rule slightly. Preterminal-word pairs can be stipulated as either weak or strong. The second rule becomes (2) Resolve reduce-reduce conflicts by performing the longest reduction with the strongest leftmost stack element. (Footnote 7: Note that strength takes precedence over length.) Therefore, if it is assumed that the lexicon encodes the information that the triadic form of "want" (V2 in the sample grammar) and the dyadic form of "position" (V1) are both weak, we can see the operation of the shift-reduce parser on the "dress on that rack" sentences of Section 2. Both sentences are similar in form and will thus have a similar configuration when the reduce-reduce conflict arises.
For example, the first sentence will be in the following configuration: [ NP wanted NP PP || ... ]. In this case, the longer reduction would require assignment of the preterminal category V2 to "want," which is the weak form; thus, the shorter reduction will be preferred, leading to the derivation in which the PP is attached to the NP. In the case in which the verb is "positioned," however, the longer reduction does not yield the weak form of the verb; it will therefore be invoked, resulting in the structure in which the PP is attached to the VP. As a side effect of these conflict resolution rules, certain sentences in the language of the grammar will receive no parse by the parsing system just discussed. These sentences are apparently the ones classified as "garden-path" sentences, a class that humans also have great difficulty parsing. Marcus's conjecture that such difficulty stems from a hard failure of the normal sentence-processing mechanism is directly modeled by the parsing system presented here. For instance, the sentence The horse raced past the barn fell exhibits a reduce-reduce conflict before the last word. If the participial form of "raced" is weak, the finite verb form will be chosen; consequently, "raced past the barn" will be reduced to a VP rather than a participial phrase. The parser will fail shortly, since the correct choice of reduction was not made. Similarly, That scaly, deep-sea fish should be underwater is important. will fail, though grammatical. Before the word "should" is shifted, a reduce-reduce conflict arises in forming an NP from either "That scaly, deep-sea fish" or "scaly, deep-sea fish." The longer (incorrect) reduction will be performed and the parser will fail. Other examples, e.g., "the boy got fat melted," or "the prime number few" would be handled similarly by the parser, though the sample grammar of Appendix I does not parse them [Pereira and Shieber, forthcoming].
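To make the two scheduling rules concrete, the following is a minimal sketch of the conflict-resolution step only; it is not the authors' implementation. The set of legal actions at a configuration is assumed to be supplied by an LALR(1) table (not reproduced here), and the weak preterminal-word pairs and all identifiers are invented for illustration rather than taken from the Appendix I grammar.

from typing import List, Optional, Tuple, Union

Reduction = Tuple[str, List[str]]     # (lhs, rhs), e.g. ("VP", ["V2", "NP", "PP"])

# Toy stipulation: triadic "want" (V2) and dyadic "position" (V1) are weak.
WEAK = {("wanted", "V2"), ("positioned", "V1")}

def leftmost_is_strong(rhs: List[str], stack_labels: List[str]) -> bool:
    """The reduction consumes the top len(rhs) stack elements and requires the
    leftmost of them to carry category rhs[0]; that word/category pair is
    strong unless the lexicon stipulates it as weak."""
    word = stack_labels[len(stack_labels) - len(rhs)]
    return (word, rhs[0]) not in WEAK

def choose_action(can_shift: bool,
                  reductions: List[Reduction],
                  stack_labels: List[str]) -> Optional[Union[str, Reduction]]:
    """Rule (1): a shift-reduce conflict is resolved as a shift.
    Rule (2): a reduce-reduce conflict is resolved as the longest reduction
    with the strongest leftmost element, strength outranking length.
    Returning None models the hard failure behind garden-path sentences."""
    if can_shift and reductions:
        return "shift"
    if reductions:
        return max(reductions,
                   key=lambda r: (leftmost_is_strong(r[1], stack_labels),
                                  len(r[1])))
    return "shift" if can_shift else None

# The "wanted" configuration [ NP wanted NP PP ]: the longer reduction would
# need the weak pair (wanted, V2), so the shorter NP -> NP PP reduction wins,
# giving the low attachment; with "positioned" the longer reduction wins.
print(choose_action(False,
                    [("VP", ["V2", "NP", "PP"]), ("NP", ["NP", "PP"])],
                    ["the woman", "wanted", "the dress", "on that rack"]))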
null
To be useful, natural-language systems must model the behavior, if not the method, of the native speaker. We have demonstrated that a parser using simple general rules for disambiguating sentences can yield appropriate behavior for a large class of performance phenomena--right association, minimal attachment, lexical preference, and garden-path sentences--and that, moreover, it can do so deterministically without generating all the parses and choosing among them. The parsing system has been implemented and has confirmed the feasibility of our approach to the modeling of these phenomena.
null
null
null
null
{ "paperhash": [ "baltin|the_mental_representation_of_grammatical_relations", "marcus|a_theory_of_syntactic_recognition_for_natural_language", "aho|lr_parsing", "pereira|natural_language_parsing:_a_new_characterization_of_attachment_preferences" ], "title": [ "The Mental representation of grammatical relations", "A theory of syntactic recognition for natural language", "LR Parsing", "Natural language parsing: A new characterization of attachment preferences" ], "abstract": [ "The editor of this volume, who is also author or coauthor of five of the contributions, has provided an introduction that not only affords an overview of the separate articles but also interrelates the basic issues in linguistics, psycholinguistics and cognitive studies that are addressed in this volume. The twelve articles are grouped into three sections, as follows: \"I. Lexical Representation: \" The Passive in Lexical Theory (J. Bresnan); On the Lexical Representation of Romance Reflexive Clitics (J. Grimshaw); and Polyadicity (J. Bresnan).\"II. Syntactic Representation: \" Lexical-Functional Grammar: A Formal Theory for Grammatical Representation (R. Kaplan and J. Bresnan); Control and Complementation (J. Bresnan); Case Agreement in Russian (C. Neidle); The Representation of Case in Icelandic (A. Andrews); Grammatical Relations and Clause Structure in Malayalam (K. P. Monahan); and Sluicing: A Lexical Interpretation Procedure (L. Levin).\"III. Cognitive Processing of Grammatical Representations: \" A Theory of the Acquisition of Lexical Interpretive Grammars (S. Pinker); Toward a Theory of Lexico-Syntactic Interactions in Sentence Perception (M. Ford, J. Bresnan, and R. Kaplan); and Sentence Planning Units: Implications for the Speaker's Representation of Meaningful Relations Underlying Sentences (M. Ford).", "Abstract : Assume that the syntax of natural language can be parsed by a left-to-right deterministic mechanism without facilities for parallelism or backup. It will be shown that this 'determinism' hypothesis, explored within the context of the grammar of English, leads to a simple mechanism, a grammar interpreter. (Author)", "The LR syn tax analysis method is a useful and versat i le technique for parsing determinis t ic context-free languages in compil ing applicat ions. This paper provides an informal exposit ion of L R parsing techniques emphasizing the mechanical generat ion of efficient L R parsers for context-free grammars . Pa r t i cu la r a t t e n t m n is given to extending the parser generat ion techniques to apply to ambiguous grammars .", "Abstract : Several authors have tried to model attachment preferences for structurally ambiguous sentences that cannot be disambiguated from semantic information. These models lack rigor and have been widely criticized. By starting from a precise choice of parsing model, it is possible to give a simple and rigorous description of Minimal Attachment and Right Association that avoids some of the problems of other models." ], "authors": [ { "name": [ "M. Baltin", "Joan Bresnan" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Mitchell P. Marcus" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "A. Aho", "Stephen C. Johnson" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "F. 
Pereira" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null, null ], "s2_corpus_id": [ "267857650", "6616065", "3254307", "57529222" ], "intents": [ [], [ "background", "methodology" ], [], [ "methodology" ] ], "isInfluential": [ false, true, false, false ] }
null
500
0.18
null
null
null
null
null
null
null
null
4230b90675bb40aea862c95830e4c649e3cd6d66
287454
null
A Modal Temporal Logic for Reasoning about Change
We examine several behaviors for query systems that become possible with the ability to represent and reason about change in data bases: queries about possible futures, queries about alternative histories, and offers of monitors as responses to queries. A modal temporal logic is developed for this purpose. A completion axiom for history is given and modelling strategies are given by example.
{ "name": [ "Mays, Eric" ], "affiliation": [ null ] }
null
null
21st Annual Meeting of the Association for Computational Linguistics
1983-06-01
13
15
null
In this paper we present a modal temporal logic that has been developed for reasoning about change in data bases. The basic motivation is as follows. A data base contains information about the world: as the world changes, so does the data base--probably maintaining some description of what the world was like before the change took place. Moreover, if the world is constrained in the ways it can change, so is the data base. We are motivated by the benefits to be gained by being able to represent those constraints and use them to reason about the possible states of a data base. It is generally accepted that a natural language query system often needs to provide more than just the literal answer to a question. For example, [Kaplan 82] presents methods for correcting a questioner's misconceptions (as reflected in a query) about the contents of a data base, as well as providing additional information in support of the literal answer to a query. By enriching the data base model, Kaplan's work on correcting misconceptions was extended in [Mays 80] to distinguish between misconceptions about data base structure and data base contents. In either case, however, the model was a static one. By incorporating a model of the data base in which a dynamic view is allowed, answers to questions can include an offer to monitor for some condition which might possibly occur in the future. The following is an example: U: "Is the Kitty Hawk in Norfolk?" S: "No, shall I let you know when she is?" (This work is partially supported by a grant from the National Science Foundation, NSF-MCS 81-07290.) But just having a dynamic view is not adequate; it is necessary that the dynamic view correspond to the possible evolution of the world that is modelled. Otherwise, behaviors such as the following might arise: U: "Is New York less than 50 miles from Philadelphia?" S: "No, shall I let you know when it is?" An offer of a monitor is said to be competent only if the condition to be monitored can possibly occur. Thus, in the latter example the offer is not competent, while in the former it is. This paper is concerned with developing a logic for reasoning about change in data bases, and assessing the impact of that capability on the behavior of question answering systems. The general area of extended interaction in data base systems is discussed in [WJMM 83]. As just pointed out, the ability to represent and reason about change in data bases affects the range and quality of responses that may be produced by a query system. Reasoning about prior possibility admits a class of queries dealing with the future possibility of some event or state of affairs at some time in the past. These queries have the general form: "Could it have been the case that p?" This class of queries will be termed counterhistoricals in an attempt to draw some parallel with counterfactuals. The future correlate of counterhistoricals, which one might call futurities, are of the form: "Can it be the case that p?" i.e. in the sense of: "Might it ever be the case that p?" The most interesting aspect of this form of question is that it admits the ability for a query system to offer a monitor as a response to a question for relevant information the system may become aware of at some future time. A query system can only competently offer such monitors when it has this ability, since otherwise it cannot determine if the monitor may ever be satisfied. We have chosen to use a modal temporal logic.
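The competence requirement on monitor offers can be put concretely: the offer is competent only if the monitored condition holds in some reachable future state (EF p in the logic introduced below). The following sketch is my own finite-state abstraction, given purely for illustration; the names are invented, and the paper reasons axiomatically over constrained histories rather than by enumerating states.

from collections import deque

def ef_holds(successors, start, holds_p):
    """EF p at `start`: is some state satisfying p reachable, in zero or more
    steps, along the immediate predecessor-successor relation?"""
    seen, queue = {start}, deque([start])
    while queue:
        s = queue.popleft()
        if holds_p(s):
            return True
        for t in successors.get(s, ()):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return False

# Toy abstraction: the Kitty Hawk can sail into Norfolk, so that monitor offer
# is competent; no possible evolution brings New York within 50 miles of
# Philadelphia, so that offer would not be.
succ = {"at_sea": ["norfolk", "at_sea"], "norfolk": ["at_sea"]}
print(ef_holds(succ, "at_sea", lambda s: s == "norfolk"))          # True
print(ef_holds(succ, "at_sea", lambda s: s == "ny_near_phila"))    # False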
There are two basic requirements which lead us toward logic and away from methods such as Petri nets. First, it may be desirable to assert that some proposition is the case without necessarily specifying exactly when. Secondly, our knowledge may be disjunctive. That is, our knowledge of temporal situations may be incomplete and indefinite, and as others have argued [Moore 82] (as a recent example), methods based on formal logic (though usually first-order) are the only ones that have so far been capable of dealing with problems of this nature.

In contrast to first-order representations, modal temporal logic makes a fundamental distinction between variability over time (as expressed by modal temporal operators) and variability in a state (as expressed using propositional or first-order languages). Modal temporal logic also reflects the temporally indefinite structure of language in a way that is more natural than the common method of using state variables and constants in a first-order logic. On the side of first-order logic, however, is expressive power that is not necessarily present in modal temporal logic. (But see [Kamp 68] and [GPSS 80] for comparisons of the expressive power of modal temporal logics with first-order theories.)

There are several possible structures that one could reasonably imagine over states in time. The one we have in mind is discrete, backwards linear, and infinite in both directions. We allow branching into the future to capture the idea that it is open, but the past is determined. Due to the nature of the intended application, we also have assumed that time is discrete. It should be stressed that this decision is not motivated by the belief that time itself is discrete, but rather by the data base application. Furthermore, in cases where it is necessary for the temporal structure to be dense or continuous, there is no immediate argument against modal temporal logic in general. (That is, one could develop a modal temporal logic that models a continuous structure of time [RU 71].)

A modal temporal structure is composed of a set of states. Each state is a set of propositions which are true of that state. States are related by an immediate predecessor-successor relation. A branch of time is defined by taking some possible sequence of states accessible over this relation from a given state. The future fragment of the logic is based on the unified branching temporal logic of [BMP 81], which introduces branches and quantifies over them to make it possible to describe properties on some or all futures. This is extended with an "until" operator (as in [Kamp 68], [GPSS 80]) and a past fragment. Since the structures are backwards linear, the existential and universal operators are merged to form a linear past fragment.

Formulas are composed from the following symbols:
- a set of atomic propositions;
- Boolean connectives: v, ~;
- temporal operators: AX (every next), EX (some next), AG (every always), EG (some always), AF (every eventually), EF (some eventually), AU (every until), EU (some until), L (immediately past), P (sometime past), H (always past), S (since).

AU, EU, and S are binary; the others are unary. For the operators composed of two symbols, the first symbol ("A" or "E") can be thought of as quantifying universally or existentially over branches in time; the second symbol as quantifying over states within the branch. Since branching is not allowed into the past, past operators have only one symbol. Formulas are built using the usual formation rules: if p is an atomic proposition, then p is a formula, and compound formulas are formed by applying the Boolean connectives and the temporal operators to formulas.
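To make the operator inventory concrete, formulas of this logic can be represented as small syntax trees. The following Python sketch is purely illustrative and not from the paper; the class and helper names are invented here.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class F:
    """A formula node: op is 'atom', 'not', 'or', or one of the temporal
    operators AX, EX, AG, EG, AF, EF, AU, EU, L, P, H, S."""
    op: str
    args: Tuple = ()

    def __str__(self):
        if self.op == 'atom':
            return self.args[0]
        return f"{self.op}({','.join(str(a) for a in self.args)})"

def atom(name):   return F('atom', (name,))
def neg(p):       return F('not', (p,))
def disj(p, q):   return F('or', (p, q))
def AG(p):        return F('AG', (p,))
def EX(p):        return F('EX', (p,))
def EF(p):        return F('EF', (p,))
def AU(p, q):     return F('AU', (p, q))   # binary: "every until"

# Example: on every branch, always, if p holds then q holds at some next state.
p, q = atom('p'), atom('q')
print(AG(disj(neg(p), EX(q))))   # -> AG(or(not(p),EX(q)))
```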
At'((t' in b & s>t'>t) -> <T,t'> |= p))) (q is true at some time in the past, and since q was true p has been true). A formula p is valid iff for every structure T and every state s in T, <T,s> |= p.

As noted earlier, this logic was developed to reason about change in data bases. Although ultimately the application requires extension to a first-order language to better express variability within a state, for now we are restricted to the propositional case. Such an extension is not without problems, but should be manageable.

The set of propositional variables for modelling change in data bases is divided into two classes. A state proposition asserts the truth of some atomic condition. An event proposition associates the occurrence of an event with the state in which it occurs. The idea is to impose constraints on the occurrence of events and then derive the appropriate state description. To be specific, let Qs1...Qsn be state propositions and Qe1...Qem be event propositions. If PHI is a boolean formula of state propositions, then formulas of the form (PHI -> EX Qei) are event constraints. To derive state descriptions from events, frame axioms are required: (Qei -> ((L PHI1) -> PHI2)), where PHI1 and PHI2 are boolean formulas of state propositions.

In the blocks world, an event constraint would be that if block A is clear and block B is clear then moving A onto B is a next possible event: ((cleartop(A) & cleartop(B)) -> EX move(A,B)). Two frame axioms are: (move(A,B) -> on(A,B)) and (move(A,B) -> ((L on(C,D)) -> on(C,D))).

If the modelling strategy were left as just outlined, nothing very significant would have been accomplished. Indeed, a simpler strategy would be hard to imagine, other than requiring that the state formulas be a complete description. This can be improved in two non-trivial ways. The first is that the conditions on the transitions may reference states earlier than the last one. Secondly, we may require that certain conditions might or must eventually happen, but not necessarily next. As mentioned earlier, these capabilities are important considerations for us. By placing biconditionals on the event constraints, it can be determined that some condition may never arise, or from knowledge of some event a reconstruction of the previous state may be obtained.

The form of the frame axioms may be inverted using the until operator to obtain a form that is perhaps more intuitive. As specified above, the form of the frame axioms will yield identical previous and next state propositions for those events that have no effect on them. The standard example from the blocks world is that moving a block does not alter the color of the block. If there are a lot of events like move that don't change block color, there will be a lot of frame axioms around stating that the events don't change the block color. But if there is only one event, say paint, that changes the color of the block, the "every until" (AU) operator can be used to state that the color of the block stays the same until it is painted. This strategy works best if we maintain a single event condition for each state; i.e., no more than a single event can occur in each state. For each application, a decision must be made as to how to best represent the frame axioms. Of course, if the world is very complicated, there will be a lot of complicated frame axioms. I see no easy way around this problem in this logic.
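A rough, hypothetical Python sketch of this modelling style: a state is a set of state propositions, an event constraint licenses events whose precondition holds, and effect/frame axioms build the successor state. The blocks-world facts and helper names are mine, and the successor function is deliberately simplified.

```python
def enabled_events(state, blocks=('A', 'B', 'C')):
    """Event constraints of the form (PHI -> EX Qe): move(a,b) is a possible
    next event whenever cleartop(a) and cleartop(b) hold in the current state."""
    events = []
    for a in blocks:
        for b in blocks:
            if a != b and f'cleartop({a})' in state and f'cleartop({b})' in state:
                events.append(('move', a, b))
    return events

def successor(state, event):
    """Effect axiom  move(A,B) -> on(A,B), plus a simple frame policy:
    other facts persist, mirroring  move(A,B) -> ((L on(C,D)) -> on(C,D))."""
    _, a, b = event
    new = {f for f in state if not f.startswith(f'on({a},')}  # a leaves its old support
    new.add(f'on({a},{b})')
    new.discard(f'cleartop({b})')                             # b is now covered
    return frozenset(new)

s0 = frozenset({'cleartop(A)', 'cleartop(B)', 'on(A,table)', 'on(B,table)'})
for e in enabled_events(s0):
    print(e, '=>', sorted(successor(s0, e)))
```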
As previously mentioned, we assume that the past is determined (i.e. backwards linear). However this does not imply that our knowledge of the past is complete. Since in some cases we may wish to claim complete knowledge with respect to one or more predicates in the past, a completion axiom is developed for an intuitively natural conception of history. Examples of predicates for which our knowledge might be complete are presidential inaugurations, employees of a company, and courses taken by someone in college.

In a first-order theory T, the completion axiom with respect to the predicate Q, where (Q c1)...(Q cn) are the only occurrences of Q in T, is: Ax((Q x) <-> x=c1 v...v x=cn). From right to left on the biconditional this just says what the original theory T did, that Q is true of c1...cn. The completion occurs from left to right, asserting that c1...cn are the only constants for which Q holds. Thus for some c' which is not equal to any of c1...cn, it is provable in the completed theory that ~(Q c'), which was not provable in the original theory T. This axiom captures our intuitive notions about Q.

The completion axiom for temporal logic is developed by introducing time propositions. The idea is that a conjunct of a time proposition, T, and some other proposition, Q, denotes that Q is true at time T. If time propositions are linearly ordered, and Q occurs only in the form P(Q & T1) &...& P(Q & Tn) in some theory M, then the history completion axiom for M with respect to Q is H(Q <-> T1 v...v Tn). Analogous to the first-order completion axiom, the direction from left to right is the completion of Q. An equivalent first-order theory to M, in which each time proposition Ti is a first-order constant ti and Q is a monadic predicate, (Q t1) &...& (Q tn), has the first-order completion axiom (with Q restricted to time constants of the past, where t0 is now): Ax<t0 ((Q x) <-> x=t1 v...v x=tn).

The propositional variables T-reg, T-add, T-drop, T-enroll, and T-break are time points intended to denote periods in the academic semester on which certain activities regarding enrollment for courses are dependent. The event propositions are Qe-reg, Qe-pass, Qe-fail, and Qe-drop, for registering for a course, passing a course, failing a course, and dropping a course, respectively. The only state proposition is Qs-reg, which means that a student is registered for a course.

A counterhistorical may be thought of as a special case of a counterfactual, where rather than asking the counterfactual, "If kangaroos did not have tails would they topple over?", one asks instead "Could I have taken CSE110 last semester?". That is, counterfactuals suppose that the present state of affairs is slightly different and then question the consequences. Counterhistoricals, on the other hand, question how a course of events might have proceeded otherwise. If we picture the underlying temporal structure, we see that although there are no branches into the past, there are branches from the past into the future. These are alternative histories to the one we are actually in. Counterhistoricals explore these alternate histories. Intuitively, a counterhistorical may be evaluated by "rolling back" to some previous state and then reasoning forward, disregarding any events that actually took place after that state, to determine whether the specified condition might arise.
For the question, "Could I have registered for CSE110 last semester?", we access the state specified by last semester, and from that state description reason forward regarding the possibility of registering for CSE110. However, a counterhistorical is really only interesting if there is some way in which the course of events is constrained. These constraints may be legal, physical, moral, bureaucratic, or a whole host of others. The set of axioms in the previous section is one example. The formalism does not provide any facility to distinguish between various sorts of constraints. Thus the mortal inevitability that everyone eventually dies is given the same importance as a university rule that you can't take the same course twice.

In the logic, the general counterhistorical has the form P(EFp). That is, is there some time in the past at which there is a future time when p might possibly be true. Constraints may be placed on the prior time: P(q & EFp), e.g. "When I was a sophomore, could I have taken Phil 6?". One might wish to require that some other condition still be accessible: P(EF(p & EFq)), e.g. "Could I have taken CSE220 and then CSE110?"; or that the counterhistorical be immediate from the most recent state: L(EXp). (The latter is interesting in what it has to say about possible alternatives to -- or the inevitability of -- what is the case now. [WM 83] shows its use in recognizing and correcting event-related misconceptions.) For example, in the registration domain, if we know that someone has passed a course then we can derive from the axioms above the counterhistorical that they could have not passed: ((P Qe-pass) -> P(EF ~Qe-pass)).

A query regarding future possibility has the general logical form EFp. That is, is there some future time in which p is true. The basic variations are: AFp, must p eventually be true; EGp, can p remain true; AGp, must p remain true. These can be nested to produce infinite variation. However, answering direct questions about future possibility is not the only use to be made of futurities. In addition, futurities permit the query system to competently offer monitors as responses to questions. (A monitor watches for some specified condition to arise and then performs some action, usually notification that the condition has occurred.) A monitor can only be offered competently if it can be shown that the condition might possibly arise, given the present state of the data base. Note that if any of the stronger forms of future possibility can be derived, it would be desirable to provide information to that effect.

For example, if a student is not registered for a course and has not passed the course and the time was prior to enrollment, a monitor for the student registering would be competently made given some question about registration, since ((~Qs-reg & ~(P Qe-pass) & AX(AF Te)) -> (EF Qe-reg)). However, if the student had previously passed the course, the monitor offer would not be competent, since ((~Qs-reg & (P Qe-pass) & AX(AF Te)) -> ~(EF Qe-reg)).

Note that if a monitor was explicitly requested, "Let me know when p happens," a futurity may be used to determine whether p might ever happen. In addition to the processing efficiency gained by discarding monitors that can never be satisfied, one is also in a position to correct a user's mistaken belief that p might ever happen, since in order to make such a request s/he must believe p could happen. Corrections of this sort arise from intensional failures of presumptions in the sense of [Mays 80] and [WM 83]. If at some future time from the monitor request, due to some intervening events, p can no longer happen, but was originally possible, an extensional failure of the presumption (in the sense of [Kaplan 82]) might be said to have occurred.
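Checking competence amounts to deciding EF p from the current state. The paper does this by proof over the axioms with a tableau procedure; as a purely illustrative alternative, the following Python sketch checks EF p by reachability over a small, finite transition structure (the structure, state names, and assumption of finiteness are mine).

```python
def ef_holds(state, p_holds, successors, seen=None):
    """EF p over a finite transition structure: is there some branch on which p
    eventually holds?  A monitor for p is competent from `state` only if this
    search succeeds."""
    if seen is None:
        seen = set()
    if state in seen:
        return False
    seen.add(state)
    if p_holds(state):
        return True
    return any(ef_holds(s, p_holds, successors, seen) for s in successors(state))

# Hypothetical registration fragment: once the course has been passed,
# registering for it again is unreachable, so a monitor offer would not be competent.
succ = {
    'enrollment-open':   ['registered', 'enrollment-closed'],
    'registered':        ['passed', 'failed', 'dropped'],
    'failed':            ['enrollment-open'],
    'dropped':           ['enrollment-open'],
    'passed':            [],
    'enrollment-closed': [],
}
is_registered = lambda s: s == 'registered'
print(ef_holds('enrollment-open', is_registered, lambda s: succ[s]))  # True
print(ef_holds('passed',          is_registered, lambda s: succ[s]))  # False
```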
The application of the constraints when attempting to determine the validity of an update to the data base is important to the determination of monitor competence. The approach we have adopted is to require, when some formula p is considered as a potential addition to the data base, that it be provable that EXp. Alternatively one could just require that the update not be inconsistent, that is, that it not be provable that AX~p. The former approach is preferred since it does not make any requirement on decidability. Thus, in order to say that a monitor for some condition p is competent, it must be provable that EFp.

This work has been influenced most strongly by work within theory of computation on proving program correctness ([BMP 81] and [GPSS 80]) and within philosophy on temporal logic [RU 71]. The work within AI that is most relevant is that of [McDermott 82]. Two of McDermott's major points regard the openness of the future and the continuity of time. With the first of these we are in agreement, but on the second we differ. This difference is largely due to the intended application of the logic. Ours is applied to changes in data base states (which are discrete), whereas McDermott's is physical systems (which are continuous). But even within the domain of physical systems it may be worthwhile to consider discrete structures as a tool for abstraction, for which computational methods may prove to be more tractable. At least by considering modal temporal logics we may be able to gain some insight into the reasoning process, whether over discrete or continuous structures.

We have not made a serious effort towards implementation thus far. A tableau-based theorem prover has been implemented for the future fragment, based on the procedure given in [BMP 81]. It is able to do problems about one-half the size of the example given here. Based on this limited experience we have a few ideas which might improve its abilities. Another procedure based on the tableau method, which draws on ideas from [BMP 81] and [RU 71], has been developed, but we are not sufficiently confident in its correctness to present it at this point.
null
null
null
null
null
null
null
null
{ "paperhash": [ "webber|varieties_of_user_misconceptions:_detection_and_correction", "moore|the_role_of_logic_in_knowledge_representation_and_commonsense_reasoning", "reiter|circumscription_implies_predicate_completion_(sometimes)", "mays|monitors_as_responses_to_questions:_determining_competence", "mcdermott|a_temporal_logic_for_reasoning_about_processes_and_plans", "mays|failures_in_natural_language_systems:_applications_to_data_base_query_systems", "gabbay|on_the_temporal_analysis_of_fairness" ], "title": [ "Varieties of User Misconceptions: Detection and Correction", "The Role of Logic in Knowledge Representation and Commonsense Reasoning", "Circumscription Implies Predicate Completion (Sometimes)", "Monitors as Responses to Questions: Determining Competence", "A Temporal Logic for Reasoning About Processes and Plans", "Failures in Natural Language Systems: Applications to Data Base Query Systems", "On the temporal analysis of fairness" ], "abstract": [ "This paper discusses some of our research into detecting and reconciling critical differences between a user's view of the world and the system's. We feel there is benefit to be gained by separating misconceptions into two main classes: misconce.pt ions about what is the case and misconceptions about what can be the case. We review some initial work in both areas and discuss our work in progress.", "This paper examines the role that formal logic ought to play in representing and reasoning with commonsense knowledge. We take issue with the commonly held view (as expressed by Newell [1980]) that the use of representations based on formal logic is inappropriate in most applications of artificial intelligence. We argue to the contrary that there is an important set of issues, involving incomplete knowledge of a problem situation, that so far have been addressed only by systems based on formal logic and deductive inference, and that, in some sense, probably can be dealt with only by systems based on logic and deduction. We further argue that the experiments of the late 1960s on problem-solving by theorem-proving did not show that the use of logic and deduction in AI systems was necessarily inefficient, but rather that what was needed was better control of the deduction process, combined with more attention to the computational properties of axioms.", "Predicate completion is an approach to closed world reasoning which assumes that the given sufficient conditions on a predicate are also necessary. Circumscription is a formal device characterizing minimal reasoning i.e. reasoning in minimal models, and is realized by an axiom schema. The basic result of this paper is that for first order theories which are Horn in a predicate P, the circumscription of P logically implies P's completion axiom.", "This paper discusses the application of a propositional temporal logic to determining the competence of a monitor offer as an extended response by a question-answering system. Determining monitor competence involves reasoning about the possibility of some future state given a description of the current state and possible transitions.", "Much previous work in artificial intelligence has neglected representing time in all its complexity. In particular, it has neglected continuous change and the indeterminacy of the future. To rectify this, I have developed a first-order temporal logic, in which it is possible to name and prove things about facts, events, plans, and world histories. 
In particular, the logic provides analyses of causality, continuous change in quantities, the persistence of facts (the frame problem), and the relationship between tasks and actions. It may be possible to implement a temporal-inference machine based on this logic, which keeps track of several “maps” of a time line, one per possible history.", "A significant class of failures in interactions with data base query systems are attributable to misconceptions or incomplete knowledge regarding the domain of discourse on the part of the user. This paper describes several types of user failures, namely, intensional failures of presumptions. These failures are distinguished from extensional failures of presumptions since they are dependent on the structure rather than the contents of the data base. A knowledge representation has been developed for the recognition of intensional failures that are due to the assumption of non-existent relationships between entities. Several other intensional failures which depend on more sophisticated knowledge representations are also discussed. Appropriate forms of corrective behavior are outlined which would enable the user to formulate queries directed to the solution of his/her particular task and compatible with the knowledge organization.", "The use of the temporal logic formalism for program reasoning is reviewed. Several aspects of responsiveness and fairness are analyzed, leading to the need for an additional temporal operator: the 'until' operator -U. Some general questions involving the 'until' operator are then discussed. It is shown that with the addition of this operator the temporal language becomes expressively complete. Then, two deductive systems DX and DUX are proved to be complete for the languages without and with the new operator respectively." ], "authors": [ { "name": [ "B. Webber", "E. Mays" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Robert C. Moore" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "R. Reiter" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "E. Mays" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "D. McDermott" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "E. Mays" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "D. Gabbay", "A. Pnueli", "S. Shelah", "J. Stavi" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null, null, null, null, null ], "s2_corpus_id": [ "23526234", "8544768", "3226826", "33799159", "12462775", "1296574", "15116302" ], "intents": [ [], [], [], [], [], [], [] ], "isInfluential": [ false, false, false, false, false, false, false ] }
null
500
0.03
null
null
null
null
null
null
null
null
b2d960f7452dd4fa161a61a4a07aa4c8d8878345
17436634
null
Using λ-Calculus to Represent Meanings in Logic Grammars
This paper describes how meanings are represented in a semantic grammar for a fragment of English in the logic programming language Prolog. The conventions of Definite Clause Grammars are used. Previous work on DCGs with a semantic component has used essentially first-order formulas for representing meanings. The system described here uses formulas of the typed λ-calculus. The first section discusses general issues concerning the use of first-order logic or the λ-calculus to represent meanings. The second section describes how λ-calculus meaning representations can be constructed and manipulated directly in Prolog. This 'programmed' representation motivates a suggestion, discussed in the third section, for an extension to Prolog so that the language itself would include a mechanism for handling the λ-formulas directly.
{ "name": [ "Warren, David Scott" ], "affiliation": [ null ] }
null
null
21st Annual Meeting of the Association for Computational Linguistics
1983-06-01
22
12
null
The initial phase of most computer programs for processing natural language is a translation system. This phase takes the English text input and transforms it into structures in some internal meaning-representation language. Most of these systems fall into one of two groups: those that use a variant of first-order logic (FOL) as their representation language, and those that use the typed λ-calculus (LC) for their representation language. (Systems based on semantic nets or conceptual dependency structures would generally be classified as using variants of FOL, but see [Jones and Warren, 1982] for an approach that views them as LC-based.)

The systems considered here are several highly formalized grammar systems that concentrate on the translation of sentences to logical form. The first-order logic systems are exemplified by those systems that have developed around (or gravitated to) logic programming, and the Prolog language in particular. These include the systems described in [Colmerauer 1982], [Warren 1981], [Dahl 1981], [Simmons and Chester 1982], and [McCord 1982]. The systems using the λ-calculus are those that developed out of the work of Richard Montague. They include the systems described in [Montague 1973], [Gawron et al. 1982], [Rosenschein and Shieber 1982], [Schubert and Pelletier 1982], and [Warren and Friedman 1981]. For the purposes of this paper, no distinction is made between the intensional logic of Montague grammar and the typed λ-calculus. There is a mapping from intensional logic to a subset of a typed λ-calculus [Gallin 1975], [Clifford 1981] that shows they are essentially equivalent in expressive power.

All these grammar systems construct a formula to represent the meaning of a sentence compositionally over the syntax tree for the sentence. They all use syntax-directed translation. This is done by first associating a meaning structure with each word. Then phrases are constructed by syntactically combining smaller phrases together using syntactic rules. Corresponding to each syntactic rule is a semantic rule that forms the meaning structure for a compound phrase by combining the meaning structures of the component phrases. This is clearly and explicitly the program used in Montague grammar. It is also the program used in Prolog-based natural language grammars with a semantic component; the Prolog language itself essentially forces this methodology.

Let us consider more carefully the meaning structures for the two classes of systems of interest here: those based on FOL and those based on LC. Each of the FOL systems, given a declarative sentence as input, produces a well-formed formula in a first-order logic to represent the meaning of the sentence. This meaning representation logic will be called the MRFOL. The MRFOL has an intended interpretation based on the real world. For example, individual variables range over objects in the world and unary predicate symbols are interpreted as properties holding of those real world objects. As a particular recent example, consider Dahl's system [1981]. Essentially the same approach was used in the Lunar System [Woods et al. 1972].
For the sentence 'Every man walks', Dahl's system would produce the expression: for(X,and(man(X),not(walk(X))),equal(card(X),0)), where X is a variable that ranges over real-world individuals. This is a formula in Dahl's MRFOL, and illustrates her meaning representation language. The formula can be paraphrased as "the X's which man is true of and walk is not true of have cardinality zero." It is essentially first-order because the variables range over individuals. (There would need to be some translation for the card function to work correctly.) This example also shows how Dahl uses a formula in her MRFOL as the meaning structure for a declarative sentence. The meaning of the English sentence is identified with the meaning that the formula has in the intended interpretations for the MRFOL.

Consider now the meaning structure Dahl uses for phrases of a category other than sentence, a noun phrase for example. For the meaning of a noun phrase, Dahl uses a structure consisting of three components: a variable and two 'formulas'. As an example, the noun phrase 'every man' has the following triple for its meaning structure: [X1, X2, for(X1,and(man(X1),not(X2)),equal(card(X1),0))].

We can understand this structure informally by thinking of the third component as representing the meaning of 'every man'. It is an object that needs a verb-phrase meaning in order to become a sentence. The X2 stands for that verb-phrase meaning. For example, during construction of the meaning of a sentence containing this noun phrase as the subject, the meaning of the verb-phrase of the sentence will be bound to X2. Notice that the components of this meaning structure are not themselves formulas in the MRFOL. They look very much like FOL formulas that represent meanings, but on closer inspection of the variables, we find that they cannot be. X2 in the third component is in the position of a formula, not a term; 'not' applies to truth values, not to individuals. Thus X2 cannot be a variable in the MRFOL, because X2 would have to vary over truth values, and all FOL variables vary over individuals. So the third component is not itself an MRFOL formula that (in conjunction with the first two components) represents the meaning of the noun phrase 'every man'.

The intuitive meaning here is clear. The third component is a formula fragment that participates in the final formula ultimately representing the meaning of the entire sentence of which this phrase is a subpart. The way this fragment participates is indicated in part by the variable X2. It is important to notice that X2 is, in fact, a syntactic variable that varies over formulas, i.e., it varies over certain terms in the MRFOL. X2 will have as its value a formula with a free variable in it: a verb-phrase waiting for a subject. The X1 in the first component indicates what the free variable must become to match this noun phrase correctly. Consider the operation of putting X1 into the verb-phrase formula and this into the noun-phrase formula when a final sentence meaning is constructed. In whatever order this is done, there must be an operation of substituting a formula with a free variable (X1) in it into the scope of a quantifier ('for') that captures it.
Semantically this is certainly a dubious operation. The point here is not that this system is wrong or necessarily deficient. Rather, the representation language used to represent meanings for subsentential components is not precisely the MRFOL. Meaning structures built for subcomponents are, in general, fragments of first-order formulas with some extra notation to be used in further formula construction. This means, in general, that the meanings of subsentential phrases are not given a semantics by first-order model theory; the meanings of intermediate phrases are (as far as traditional first-order logic is concerned) merely uninterpreted data structures. The point is that the system is building terms, syntactic objects, that will eventually be put together to represent meanings of sentences. This works because these terms, the ones ultimately associated with sentences, always turn out to be formulas in the MRFOL in just the right way. However, some of the terms it builds on the way to a sentence, terms that correspond to subcomponents of the sentence, are not in the MRFOL, and so do not have an interpretation in its real world model.

Next let us move to a consideration of those systems which use the typed λ-calculus (LC) as their meaning representation language. Consider again the simple sentence 'Every man walks'. The grammar of [Montague 1973] associates with this sentence the meaning: forall(X,implies(man(X),walk(X))). (We use an extensional fragment here for simplicity.) This formula looks very much like the first-order formula given above by the Dahl system for the same sentence. This formula, also, is a formula of the typed λ-calculus (FOL is a subset of LC).

Now consider a noun phrase and its associated meaning structure in the LC framework. For 'every man' the meaning structure is: λ(P,forall(X,implies(man(X),P(X)))). This meaning structure is a formula in the λ-calculus. As such it has an interpretation in the intended model for the LC, just as any other formula in the language has. This interpretation is a function from properties to truth-values; it takes properties that hold of every man to 'true' and all other properties to 'false'. This shows that in the LC framework, sentences and subsentential phrases are given meanings in the same way, whereas in FOL systems only the sentences have meanings.
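The functional reading of the LC meaning can be mimicked directly in any higher-order language. The small Python sketch below (toy domain and lexicon invented here, not from the paper) shows 'every man' as a function from properties to truth values, applied to the 'walks' property.

```python
# A toy extensional model: a domain plus the extensions of 'man' and 'walk'.
domain = ['john', 'bill', 'mary']
man    = lambda x: x in ('john', 'bill')
walk   = lambda x: x in ('john', 'bill', 'mary')

# LC-style meaning of 'every man': a function from properties P to truth values,
# corresponding to the term  lambda P. forall X (man(X) -> P(X)).
every_man = lambda P: all((not man(x)) or P(x) for x in domain)

# Sentence meaning = NP meaning applied to VP meaning.
print(every_man(walk))                   # True: every man walks
print(every_man(lambda x: x == 'mary'))  # False: not every man is Mary
```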
Meaning structures for sentences are well-formed LC formulas of type truth-value; those for other phrases are well-formed LC terms of other types. Consider this λ-formula for 'every man' and compare it with the three-tuple meaning structure built for it in the Dahl system. The λ-variable P plays a corresponding role to the X2 variable of the triple; its ultimate value comes from a verb-phrase meaning encountered elsewhere in the sentence.

First-order logic is not quite expressive enough to represent directly the meanings of the categories of phrases that can be subcomponents of sentences. In systems based on first-order logic, this limitation is handled by explicitly constructing fragments of formulas, with extra notation to indicate how they must later combine with other fragments to form a true first-order formula that correctly represents the meaning of the entire sentence. In some sense the construction of the semantic representation is entirely syntactic until the full sentence meaning structure is constructed, at which point it comes to a form that does have a semantic interpretation. In contrast, in systems that use the typed λ-calculus, actual formulas of the formal language are used at each step, the language of the λ-calculus is never left, and the building of the semantic representation can actually be understood as operations on semantic objects.

The general idea of how to handle the example sentence 'Every man walks' in the two systems is essentially the same. The major difference is how this idea is expressed in the available languages. The LC system can express the entire idea in its meaning representation language, because the typed λ-calculus is a more expressive language. The obvious question to ask is whether there is any need for semantically interpretable meaning representations at the subsentential level. One important reason is that to do formal deduction on subsentential components, their meanings must be represented in a formal meaning representation language. LC provides such a language and FOL does not. And one thing the field seems to have learned from experience in natural language processing is that inferencing is useful at all levels of processing, from words to entire texts. This points us toward something like the LC. The problem, of course, is that because the LC is so expressive, deduction in the full LC is extremely difficult. Some problems which are decidable in FOL become undecidable in the λ-calculus; some problems that are semi-decidable in FOL do not even have partial decision procedures in the LC. It is certainly clear that each language has limitations; the FOL is not quite expressive enough, and the LC is much too powerful. With this in mind, we next look at some of the implications of trying to use the LC as the meaning representation language in a Prolog system.

LC IN PROLOG

Prolog is extremely attractive as a language for expressing grammars.
Metamorphosis grammars [Colmerauer 1978] and Definite Clause Grammars (DCGs) [Pereira and Warren 1980] are essentially conventions for representing grammars as logic programs. DCGs can perhaps most easily be understood as an improved version of the Augmented Transition Network language [Woods 1970]. Other work on natural language in the Prolog framework has used first-order meaning representation languages. The rest of this paper explores the implications of using the λ-calculus as the meaning representation language for a system written in Prolog using the DCG conventions.

The following paragraphs describe a system that includes a very small grammar. The point of this system is to investigate the use of Prolog to construct meanings with the λ-calculus as the meaning representation language, and not to explore questions of linguistic coverage. The grammar is based on the grammar of [Montague 1973], but is entirely extensional. Including intensionality would present no new problems in principle.

The idea is very simple. Each nonterminal in the grammar becomes a three-place predicate in the Prolog program. The second and third places indicate locations in the input string, and are normally suppressed when DCGs are displayed. The first place is the LC formula representing the meaning of the spanned syntactic component. The crucial decision is how to represent variables in the λ-formulas. One 'pure' way is to use a Prolog function symbol, say lvar, of one argument, an integer. Then lvar(37) would represent a λ-variable. For our purposes, we need not explicitly encode the type of λ-terms, since all the formulas that are constructed are correctly typed. For other purposes it might be desirable to encode the type explicitly in a second argument of lvar. Constants could easily be represented using another function symbol, lcon. Its first argument would identify the constant. A second argument could encode its type, if desired. Application of a λ-term to another is represented using the Prolog function symbol lapply, which has two argument places, the first for the function term, the second for the argument term. Lambda abstraction is represented using a function symbol lambda with two arguments: the λ-variable and the function body. Other commonly used connectives, such as 'and' and 'or', are represented by similarly named function symbols with the appropriate number of argument places. With this encoding scheme, the λ-term λP(∃x(man(x) & P(x))) would be represented by the (perhaps somewhat awkward-looking) Prolog term: lambda(lvar(3),lthereis(lvar(1),land(lapply(lcon(man),lvar(1)),lapply(lvar(3),lvar(1))))).

λ-reduction would be coded as a predicate lreduce(Form,Reduced), whose first argument is an arbitrary λ-formula and whose second is its λ-reduced form. This encoding requires one to generate new variables to create variants of terms in order to avoid collisions of λ-variables. The normal way to avoid collisions is with a global 'gensym' counter, to ensure the same variable is never used twice. One way to do this in Prolog is to include a place for the counter in each grammar predicate. This can be done by including a parameter which will always be of the form gensym(Left,Right), where Left is the value of the gensym counter at the left end of the phrase spanned by the predicate and Right is the value at the right end. Any use of a λ-variable in building a λ-formula uses the counter and bumps it.
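The paper's encoding and its lreduce/2 predicate are given in Prolog; as a rough, non-authoritative illustration of the same idea, here is a Python sketch of the term encoding and of capture-avoiding λ-reduction using a gensym-style counter. All names below are mine, not the paper's.

```python
import itertools

# Terms mirror the paper's encoding: ('var', n) ~ lvar(n), ('con', c) ~ lcon(c),
# ('apply', f, a) ~ lapply(F,A), and binders ('lam'/'thereis'/'forall', var, body).
BINDERS = ('lam', 'thereis', 'forall')
fresh = itertools.count(1000)            # the 'gensym' counter for fresh variables

def subst(term, var, value):
    """Capture-avoiding substitution of `value` for `var` in `term`."""
    tag = term[0]
    if tag == 'var':
        return value if term == var else term
    if tag == 'con':
        return term
    if tag == 'apply':
        return ('apply', subst(term[1], var, value), subst(term[2], var, value))
    # A binder: rename its bound variable to a fresh one before substituting inside.
    bound, body = term[1], term[2]
    new_bound = ('var', next(fresh))
    return (tag, new_bound, subst(subst(body, bound, new_bound), var, value))

def lreduce(term):
    """Beta-reduce a term (the analogue of the lreduce predicate in the text)."""
    tag = term[0]
    if tag in ('var', 'con'):
        return term
    if tag in BINDERS:
        return (tag, term[1], lreduce(term[2]))
    fun, arg = lreduce(term[1]), lreduce(term[2])
    if fun[0] == 'lam':
        return lreduce(subst(fun[2], fun[1], arg))
    return ('apply', fun, arg)

# The example term above, lambda P. thereis X (man(X) & P(X)):
P, X = ('var', 3), ('var', 1)
a_man = ('lam', P, ('thereis', X,
                    ('apply', ('apply', ('con', 'and'),
                               ('apply', ('con', 'man'), X)),
                     ('apply', P, X))))
# Applying it to the constant walk and reducing yields (a renamed variant of)
# the encoding of  thereis(X, and(man(X), walk(X))).
print(lreduce(('apply', a_man, ('con', 'walk'))))
```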
An alternative and more efficient way to encode λ-terms as Prolog terms involves using Prolog variables for λ-variables. This makes the substitution trivial, essentially using Prolog's built-in facility for manipulating variables. It does, however, require the use of Prolog's meta-logical predicate var to test whether a Prolog variable is currently instantiated to a variable. This is necessary to prevent the λ-variables from being used by Prolog as Prolog variables. In the example below, we use Prolog variables for λ-variables and also modify the lcon function encoding of constants, and let constants stand for themselves. This results in a need to use the meta-logical predicate atom. This encoding scheme might best be considered as an efficiency hack to use Prolog's built-in variable-handling facilities to speed the λ-reduction. We give below the Prolog program that represents a small example grammar with a few rules. This shows how meaning structures can be represented as λ-formulas and manipulated in Prolog. Notice the simple, regular structure of the rules. Each consists of a sequence of grammar predicates that constructs the meanings of the subcomponents, followed by an instance of the lreduce predicate that constructs the compound meaning from the component meanings and λ-reduces the result. The syntactic manipulation of the formulas, which results for example in the relatively simple formula for the sentence 'Every man walks' shown above, is done in the λ-reduction performed by the lreduce predicate. There are, however, problems with this approach. First, neither of the suggested implementations of λ-reduction in Prolog is particularly attractive. The first, which uses first-order constants to represent variables, requires the addition of a messy gensym argument place to every predicate to simulate the global counter. This seems both inelegant and a duplication of effort, since the Prolog interpreter has a similar kind of variable-handling mechanism built into it. The second approach takes advantage of Prolog's built-in variable facilities, but requires the use of Prolog's meta-logical facilities to do so. This is because Prolog variables are serving two functions, as Prolog variables and as λ-variables. The two kinds of variables function differently and must be differentiated. Second, there is a problem with invertibility. Many Prolog programs are invertible and may be run 'backwards'. We should be able, for example, to evaluate the sentence grammar predicate giving the meaning of a sentence and have the system produce the sentence itself. This ability to go from a meaning formula back to an English phrase that would produce it is one of the attractive properties of logic grammars. The grammar presented here can also be run this way. However, a careful look at this computation process reveals that with this implementation the Prolog interpreter performs essentially an exhaustive search. It generates every subphrase, λ-reduces it and checks to see if it has the desired meaning. Aside from being theoretically unsatisfactory, for a grammar much larger than a trivially small one this approach would not be computationally feasible. So the question arises as to whether the Prolog interpreter might be enhanced to know about λ-formulas and manipulate them directly. Then the Prolog interpreter itself would handle the λ-reduction and would be responsible for avoiding variable collisions. The logic grammars would look even simpler because the lreduce predicate would not need to be explicitly included in each grammar rule.
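The small example grammar referred to above did not survive in this text. The following is a hypothetical reconstruction for illustration, consistent with the description and with the fact iv(walk, [walks|X], X) cited below, but it is not the paper's original figure. Prolog variables serve as λ-variables, constants stand for themselves, and lreduce/2 is assumed to be a λ-reducer adapted to this encoding (using var/1 and atom/1 as described above):

    % Each nonterminal is a three-place predicate: meaning, then the usual
    % DCG difference-list pair.
    ts(M, S0, S) :-                    % sentence -> term + intransitive verb
        te(M1, S0, S1),
        iv(M2, S1, S),
        lreduce(lapply(M1, M2), M).

    te(M, S0, S) :-                    % term -> determiner + common noun
        det(M1, S0, S1),
        cn(M2, S1, S),
        lreduce(lapply(M1, M2), M).

    det(lambda(P, lambda(Q, forall(X, implies(lapply(P, X), lapply(Q, X))))),
        [every|S], S).
    det(lambda(P, lambda(Q, thereis(X, and(lapply(P, X), lapply(Q, X))))),
        [a|S], S).

    cn(man,   [man|S],   S).
    cn(woman, [woman|S], S).

    iv(walk, [walks|X], X).
    iv(talk, [talks|X], X).

With this reconstruction, the query ts(M, [every, man, walks], []) binds M to forall(X, implies(lapply(man, X), lapply(walk, X))), the formula for 'Every man walks' discussed above. Note the explicit lreduce call in each rule; the interpreter enhancement discussed above would make these calls unnecessary, as the ts clause shown next illustrates.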
For example, the ts clause in the grammar in the figure above would become:

    ts(lapply(M1, M2), X, Y) :- te(M1, X, Z), iv(M2, Z, Y).

Declarations to the Prolog interpreter could be included to indicate the predicate argument places that contain λ-terms. Consider what would be involved in this modification to the Prolog system. It might seem that all that is required is just the addition of a λ-reduction operator applied to λ-arguments. And indeed, when executing in the forward direction, this is essentially all that is involved. Consider what happens, however, if we wish to execute the grammar in the reverse direction, i.e., give a λ-term that is a meaning, and have the Prolog system find the English phrase that has that meaning. Now we find the need for a 'λ-expansion' ability. Consider the situation in which we present Prolog with the following goal:

    ts(forall(X, implies(man(X), walk(X))), S, []).

Prolog would first try to match it with the head of the ts clause given above. This would require matching the first terms, i.e., forall(X, implies(lapply(man, X), lapply(walk, X))) and lapply(M1, M2) (using our encoding of λ-terms as Prolog terms). The matcher would have available the types of the variables and terms. We would like it to be able to discover that by substituting the right terms for the variables in the second term, in particular substituting lambda(P, forall(X, implies(lapply(man, X), lapply(P, X)))) for M1 and walk for M2, it becomes the same as the first term (after reduction). These M1 and M2 values would then be passed on to the te and iv predicates. The iv predicate, for example, can easily find in the facts the word to express the meaning of the term walk; it is the word 'walks' and is expressed by the fact iv(walk, [walks|X], X), shown above. For the predicate te, given the value of M1, the system would have to match it against the head of the te clause and then do further computation to eventually construct the sentence. What we require is a general algorithm for matching λ-terms. Just as Prolog uses unification of first-order terms for its parameter mechanism, to enhance Prolog to include λ-terms, we need general unification of λ-terms. The problem is that λ-unification is much more complicated than first-order unification. For a unifiable pair of first-order terms, there exists a unique (up to change of bound variable) most general unifier (mgu) for them. In the case of λ-terms, this is not true; there may be many unifiers, which are not generalizations of one another. Furthermore, unification of λ-terms is, in general, undecidable. These facts in themselves, while perhaps discouraging, need not force us to abandon hope. The fact that there is no unique mgu just contributes another place for nondeterminism to the Prolog interpreter. And all interpreters which have the power of a universal Turing machine have undecidable properties. Perhaps another source of undecidability can be accommodated. Huet [1975] has given a semi-decision procedure for unification in the typed λ-calculus. The question of whether this approach is feasible really comes down to the finer properties of the unification procedure. It seems not unreasonable to hope that in the relatively simple cases we seem to have in our grammars, this procedure can be made to perform adequately. Notice that, for parsing in the forward direction, the system will always be unifying a λ-term with a variable, in which case the unification problem is trivial.
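As an illustration of the backward computation described here (using the hypothetical reconstruction sketched earlier, so the details are our assumptions rather than the paper's own program), the query and its intended answer would be:

    ?- ts(forall(X, implies(lapply(man, X), lapply(walk, X))), S, []).
    %  S = [every, man, walks]

With the lreduce-based grammar, this answer is found only by generating candidate phrases, λ-reducing their meanings, and testing them against the goal formula; with λ-unification built into the interpreter, the goal term could instead be matched directly against lapply(M1, M2) as just described.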
We are in the process of programming Huet's algorithm to include it in a simple Prolog-like interpreter. We intend to experiment with it to see how it performs on the λ-terms used to represent meanings of natural language expressions. Warren [1982] points out how some suggestions for incorporating the λ-calculus into Prolog are motivated by needs that can easily and naturally be met in Prolog itself, unextended. Following his suggestions for how to represent λ-expressions in Prolog directly, we would represent the meaning of a sentence by a set of asserted Prolog clauses and an encoding atomic name, which would have to be generated. While this might be an interesting alternate approach to meaning representations, it is quite different from the ones discussed here. We have discussed two alternatives for meaning representation languages for use in the context of logic grammars. We pointed out how one advantage of the typed λ-calculus over first-order logic is its ability to represent directly meanings of phrases of all syntactic categories. We then showed how we could implement in Prolog a logic grammar using the λ-calculus as the meaning representation language. Finally we discussed the possibility and some of the implications of trying to include part of the λ-calculus in the logic programming system itself. We suggested how such an integration might allow grammars to be executed backwards, generating English sentences from input logical forms. We intend to explore this further in future work. If the λ-calculus can be smoothly incorporated in the way suggested, then natural language grammar writers will find themselves 'programming' in two languages, the first-order language (e.g. Prolog) for syntax, and the typed λ-calculus (e.g. typed LISP) for semantics. As a final note regarding meaning representation languages: we are still left with the feeling that the first-order languages are too weak to express the meanings of phrases of all categories, and that the λ-calculus is too expressive to be computationally tractable. There is a third class of languages that holds promise of solving both these difficulties, the function-level languages that have recently been developed in the area of programming languages [Backus 1978], [Shultis 1982]. These languages represent functions of various types and thus can be used to represent the meanings of subsentential phrases in a way similar to the λ-calculus. Deduction in these languages is currently an active area of research, and much is beginning to be known about their algebraic properties. Term rewriting systems seem to be a powerful tool for reasoning in these languages. I would not be surprised if these function-level languages were to strongly influence the formal meaning representation languages of the future.
null
null
null
null
Main paper: The initial phase of most computer programs for processing natural language is a translation system. This phase takes the English text input and transforms it into structures in some internal meaning-representation language. Most of these systems fall into one of two groups: those that use a variant of first-order logic (FOL) as their representation language, and those that use the typed λ-calculus (LC) for their representation language. (Systems based on semantic nets or conceptual dependency structures would generally be classified as using variants of FOL, but see [Jones and Warren, 1982] for an approach that views them as LC-based.) The systems considered here are several highly formalized grammar systems that concentrate on the translation of sentences to logical form. The first-order logic systems are exemplified by those systems that have developed around (or gravitated to) logic programming, and the Prolog language in particular. These include the systems described in [Colmerauer 1982], [Warren 1981], [Dahl 1981], [Simmons and Chester 1982], and [McCord 1982]. The systems using the λ-calculus are those that developed out of the work of Richard Montague. They include the systems described in [Montague 1973], [Gawron et al. 1982], [Rosenschein and Shieber 1982], [Schubert and Pelletier 1982], and [Warren and Friedman 1981]. For the purposes of this paper, no distinction is made between the intensional logic of Montague grammar and the typed λ-calculus. There is a mapping from intensional logic to a subset of a typed λ-calculus [Gallin 1975], [Clifford 1981] that shows they are essentially equivalent in expressive power. All these grammar systems construct a formula to represent the meaning of a sentence compositionally over the syntax tree for the sentence. They all use syntax-directed translation. This is done by first associating a meaning structure with each word. Then phrases are constructed by syntactically combining smaller phrases together using syntactic rules. Corresponding to each syntactic rule is a semantic rule that forms the meaning structure for a compound phrase by combining the meaning structures of the component phrases. This is clearly and explicitly the program used in Montague grammar. It is also the program used in Prolog-based natural language grammars with a semantic component; the Prolog language itself essentially forces this methodology. Let us consider more carefully the meaning structures for the two classes of systems of interest here: those based on FOL and those based on LC. Each of the FOL systems, given a declarative sentence as input, produces a well-formed formula in a first-order logic to represent the meaning of the sentence. This meaning representation logic will be called the MRFOL. The MRFOL has an intended interpretation based on the real world. For example, individual variables range over objects in the world and unary predicate symbols are interpreted as properties holding of those real-world objects. As a particular recent example, consider Dahl's system [1981]. Essentially the same approach was used in the Lunar System [Woods, et al.
1972]. For the sentence 'Every man walks', Dahl's system would produce the expression:

    for(X, and(man(X), not(walk(X))), equal(card(X), 0))

where X is a variable that ranges over real-world individuals. This is a formula in Dahl's MRFOL, and illustrates her meaning representation language. The formula can be paraphrased as "the X's which man is true of and walk is not true of have cardinality zero." It is essentially first-order because the variables range over individuals. (There would need to be some translation for the card function to work correctly.) This example also shows how Dahl uses a formula in her MRFOL as the meaning structure for a declarative sentence. The meaning of the English sentence is identified with the meaning that the formula has in the intended interpretations for the MRFOL. Consider now the meaning structure Dahl uses for phrases of a category other than sentence, a noun phrase, for example. For the meaning of a noun phrase, Dahl uses a structure consisting of three components: a variable, and two 'formulas'. As an example, the noun phrase 'every man' has the following triple for its meaning structure:

    [X1, X2, for(X1, and(man(X1), not(X2)), equal(card(X1), 0))]

We can understand this structure informally by thinking of the third component as representing the meaning of 'every man'. It is an object that needs a verb-phrase meaning in order to become a sentence. The X2 stands for that verb-phrase meaning. For example, during construction of the meaning of a sentence containing this noun phrase as the subject, the meaning of the verb phrase of the sentence will be bound to X2. Notice that the components of this meaning structure are not themselves formulas in the MRFOL. They look very much like FOL formulas that represent meanings, but on closer inspection of the variables, we find that they cannot be. X2 in the third component is in the position of a formula, not a term; 'not' applies to truth values, not to individuals. Thus X2 cannot be a variable in the MRFOL, because X2 would have to vary over truth values, and all FOL variables vary over individuals. So the third component is not itself an MRFOL formula that (in conjunction with the first two components) represents the meaning of the noun phrase 'every man'. The intuitive meaning here is clear. The third component is a formula fragment that participates in the final formula ultimately representing the meaning of the entire sentence of which this phrase is a subpart. The way this fragment participates is indicated in part by the variable X2. It is important to notice that X2 is, in fact, a syntactic variable that varies over formulas, i.e., it varies over certain terms in the MRFOL. X2 will have as its value a formula with a free variable in it: a verb phrase waiting for a subject. The X1 in the first component indicates what the free variable must become to match this noun phrase correctly. Consider the operation of putting X1 into the verb-phrase formula and this into the noun-phrase formula when a final sentence meaning is constructed. In whatever order this is done, there must be an operation of substituting a formula with a free variable (X1) in it into the scope of a quantifier ('for') that captures it.
Semantically this is certainly a dubious operation. The point here is not that this system is wrong or necessarily deficient. Rather, the representation language used to represent meanings for subsentential components is not precisely the MRFOL. Meaning structures built for subcomponents are, in general, fragments of first-order formulas with some extra notation to be used in further formula construction. This means, in general, that the meanings of subsentential phrases are not given a semantics by first-order model theory; the meanings of intermediate phrases are (as far as traditional first-order logic is concerned) merely uninterpreted data structures. The point is that the system is building terms, syntactic objects, that will eventually be put together to represent meanings of sentences. This works because these terms, the ones ultimately associated with sentences, always turn out to be formulas in the MRFOL in just the right way. However, some of the terms it builds on the way to a sentence, terms that correspond to subcomponents of the sentence, are not in the MRFOL, and so do not have an interpretation in its real-world model. Next let us move to a consideration of those systems which use the typed λ-calculus (LC) as their meaning representation language. Consider again the simple sentence 'Every man walks'. The grammar of [Montague 1973] associates with this sentence the meaning:

    forall(X, implies(man(X), walk(X)))

(We use an extensional fragment here for simplicity.) This formula looks very much like the first-order formula given above by the Dahl system for the same sentence. This formula, also, is a formula of the typed λ-calculus (FOL is a subset of LC). Now consider a noun phrase and its associated meaning structure in the LC framework. For 'every man' the meaning structure is:

    λ(P, forall(X, implies(man(X), P(X))))

This meaning structure is a formula in the λ-calculus. As such it has an interpretation in the intended model for the LC, just as any other formula in the language has. This interpretation is a function from properties to truth-values; it takes properties that hold of every man to 'true' and all other properties to 'false'.
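To make this concrete, the sentence meaning arises from this noun-phrase meaning by a single function application followed by λ-reduction. The following worked reduction is supplied here for illustration (it is the standard Montague-style step, written in the paper's notation, not a display from the original text):

    λ(P, forall(X, implies(man(X), P(X))))  applied to  walk
        λ-reduces to
    forall(X, implies(man(X), walk(X)))

The argument position filled by walk is exactly the role played by the λ-variable P, just as the X2 slot of the Dahl triple is eventually filled by a verb-phrase meaning.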
null
null
null
null
{ "paperhash": [ "jones|conceptual_dependency_and_montague_grammar:_a_step_toward_conciliation", "simmons|relating_sentences_and_semantic_networks_with_procedural_logic", "warren|using_semantics_in_non-context-free_parsing_of_montague_grammar", "rosenschein|translating_english_into_logical_form", "warren|efficient_processing_of_interactive_relational_data_base_queries_expressed_in_logic", "dahl|translating_spanish_into_logic_through_logic", "backus|can_programming_be_liberated_from_the_von_neumann_style?:_a_functional_style_and_its_algebra_of_programs", "shultis|hierarchical_semantics,_reasoning,_and_translation", "schubert|from_english_to_logic:_context-free_computation_of_‘conventional’_logical_translation", "montague|formal_philosophy;_selected_papers_of_richard_montague" ], "title": [ "Conceptual Dependency and Montague Grammar: A Step Toward Conciliation", "Relating sentences and semantic networks with procedural logic", "Using Semantics in Non-Context-Free Parsing of Montague Grammar", "Translating English Into Logical Form", "Efficient Processing of Interactive Relational Data Base Queries expressed in Logic", "Translating Spanish Into Logic Through Logic", "Can programming be liberated from the von Neumann style?: a functional style and its algebra of programs", "Hierarchical semantics, reasoning, and translation", "From English to Logic: Context-Free Computation of ‘Conventional’ Logical Translation", "Formal philosophy; selected papers of Richard Montague" ], "abstract": [ "In attempting to establish a common basis from which the approaches and results can be compared, we have taken a conciliatory attitude toward natural language research in the conceptual dependency (CD) paradigm and Montague Grammar (MG) formalism. Although these two approaches may seem to be strange bedfellows indeed with often noticeably different perspectives, we have observed many commonalities. We begin with a brief description of the problem view and ontology of each and then create a formulation of CD as logic. We then give \"conceptual\" MG translations for the words in an example sentence which we use in approximating a word-based parsing style. Finally, we make some suggestions regarding further extensions of logic to introduce higher level representations.", "A system of symmetric clausal logic axioms is shown to transform a thirteen-sentence narrative about a v-2 rocket flight into semantic case relations. The same axioms translate the case relations into english sentences. An approach to defining schemas in clausal logic is presented and applied in the form of a mini-flight schema to two paragraphs of the text to compute a partitioning of the semantic network into the causal organization of a flight. Properties of rule symmetry and network condensibility are noted to be of importance for natural language processing. Because of the conciseness of the logic interpreter and the clausal representation for grammars and schemes, it is concluded that the procedural logic approach provides an effective programming system that is promising for accomplishing natural language computations on mini- and microcomputers as well as on large mainframes. 29 references.", "In natural language processing, the question of the appropriate interaction of syntax and semantics during sentence analysis has long been of interest. Montague grammar with its fully formalized syntax and semantics provides a complete, well-defined context in which these questions can be considered. 
This paper describes how semantics can be used during parsing to reduce the combinatorial explosion of syntactic ambiguity in Montague grammar. A parsing algorithm, called semantic equivalence parsing, is presented and examples of its operation are given. The algorithm is applicable to general non-context-free grammars that include a formal semantic component. The second portion of the paper places semantic equivalence parsing in the context of the very general definition of an interpreted language as a homomorphism between syntactic and semantic algebras (Montague 1970).", "A scheme for syntax-directed translation that mirrors compositional model-theoretic semantics is discussed. The scheme is the basis for an English translation system called PATR and was used to specify a semantically interesting fragment of English, including such constructs as tense, aspect, modals, and various lexically controlled verb complement structures. PATR was embedded in a question-answering system that replied appropriately to questions requiring the computation of logical entailments.", "Relational database retrieval is viewed as a special case of deduction in logic. It is argued that expressing a query in logic clarifies the problems involved in processing it efficiently (\"query optimisationn). The paper describes a simple but effective strategy for planning a query so that it can be efficiently executed by the elementary deductive mechanism provided in the programming language Prolog. This planning algorithm has been implemented as part of a natural language question answering system, called Chat-80. The Chat-80 method of query planning and execution is compared with the strategies used in other relational database systems, particularly Ingres and System R.", "We discuss the use of logic for natural language (NL) processing, both as an internal query language and as a programming tool. Some extensions of standard predicate calculus are motivated by the first of these roles. A logical system including these extensions is informally described. It incorporates semantic as well as syntactic NL features, and its semantics in a given interpretation (or data base) determines the answer-extraction process. We also present a logic-programmed analyser that translates Spanish into this system. It equates semantic agreement with syntactic weil-formedness, and can detect certain presuppositions, resolve certain ambiguities and reflect relations among sets.", "Conventional programming languages are growing ever more enormous, but not stronger. Inherent defects at the most basic level cause them to be both fat and weak: their primitive word-at-a-time style of programming inherited from their common ancestor—the von Neumann computer, their close coupling of semantics to state transitions, their division of programming into a world of expressions and a world of statements, their inability to effectively use powerful combining forms for building new programs from existing ones, and their lack of useful mathematical properties for reasoning about programs.\nAn alternative functional style of programming is founded on the use of combining forms for creating programs. Functional programs deal with structured data, are often nonrepetitive and nonrecursive, are hierarchically constructed, do not name their arguments, and do not require the complex machinery of procedure declarations to become generally applicable. 
Combining forms can use high level programs to build still higher level ones in a style not possible in conventional languages.\nAssociated with the functional style of programming is an algebra of programs whose variables range over programs and whose operations are combining forms. This algebra can be used to transform programs and to solve equations whose “unknowns” are programs in much the same way one transforms equations in high school algebra. These transformations are given by algebraic laws and are carried out in the same language in which programs are written. Combining forms are chosen not only for their programming power but also for the power of their associated algebraic laws. General theorems of the algebra give the detailed behavior and termination conditions for large classes of programs.\n A new class of computing systems uses the functional programming style both in its programming language and in its state transition rules. Unlike von Neumann languages, these systems have semantics loosely coupled to states—only one state transition occurs per major computation.", "A framework is presented for the formal specification, analysis, and translation of programming languages. The framework is built around SSL, an algebraic metalanguage capable of expressing and manipulating semantics on many levels of abstraction. SSL supports a general method for determining the static properties of languages and their programs by computing with homomorphic images of their definitions. The use of SSL to define a programming language and derive algebraic proof rules for reasoning about the defined language is demonstrated. \nTranslation of programs written in languages that are defined in SSL is effected by transforming the programs's SSL semantics from a source-level dialect of SSL to a target-level dialect. The transformations are proved correct using the algebraic laws associated with SSL, and the applicability of transformations to particular programs is ascertained by use of the algebraic static analysis technique. Translations can be cascaded through a hierarchy of intermediate languages, each defined in SSL using the abstractions provided by lower levels of the hierarchy. Code improvement by transformation can be effected at each intermediate level.", "We describe an approach to parsing and logical translation that was inspired by Gazdar's work on context-free grammar for English. Each grammar rule consists of a syntactic part that specifies an acceptable fragment of a parse tree, and a semantic part that specifies how the logical formulas corresponding to the constituents of the fragment are to be combined to yield the formula for the fragment. However, we have sought to reformulate Gazdar's semantic rules so as to obtain more or less 'conventional' logical translations of English sentences, avoiding the interpretation of NPs as property sets and the use of intensional functors other than certain propositional operators. The reformulated semantic rules often turn out to be slightly simpler than Gazdar's. Moreover, by using a semantically ambiguous logical syntax for the preliminary translations, we can account for quantifier and coordinator scope ambiguities in syntactically unambiguous sentences without recourse to multiple semantic rules, and are able to separate the disambiguation process from the operation of the parser-translator. 
We have implemented simple recursive descent and left-corner parsers to demonstrate the practicality of our approach.", "Getting the books formal philosophy selected papers of richard montague now is not type of challenging means. You could not solitary going later than ebook buildup or library or borrowing from your friends to approach them. This is an agreed easy means to specifically get lead by online. This online broadcast formal philosophy selected papers of richard montague can be one of the options to accompany you as soon as having supplementary time." ], "authors": [ { "name": [ "M. Jones", "D. Warren" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Robert F. Simmons", "Daniel L. Chester" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "D. Warren", "J. Friedman" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "S. Rosenschein", "Stuart M. Shieber" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "D. Warren" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "V. Dahl" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "J. Backus" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Jonathan C. Shultis" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Lenhart K. Schubert", "F. J. Pelletier" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "R. Montague" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null, null, null, null, null, null, null, null ], "s2_corpus_id": [ "39794097", "10437874", "7970879", "9564084", "42399194", "9013266", "16367522", "59891594", "17712124", "144783805" ], "intents": [ [ "background" ], [], [ "methodology" ], [], [], [ "background", "methodology" ], [ "background" ], [], [ "methodology" ], [] ], "isInfluential": [ false, false, false, false, false, false, false, false, false, false ] }
Problem: Investigating the representation of meanings in a semantic grammar for a fragment of English using the logic programming language Prolog. Solution: The hypothesis explores the implications of using the typed λ-calculus as the meaning representation language in a Prolog system for logic grammars, aiming to simplify the representation of meanings for subsentential components and enhance the inferencing capabilities at all levels of processing.
500
0.024
null
null
null
null
null
null
null
null
b2107890699b829004421b693cf10835c2fcca45
216848388
null
Formal Constraints on Metarules
Metagrammatical formalisms that combine context-free phrase structure rules and metarules (MPS grammars) allow concise statement of generalizations about the syntax of natural languages. Unconstrained MPS grammars, unfortunately, are not computationally "safe." We evaluate several proposals for constraining them, basing our assessment on computational tractability and explanatory adequacy. We show that none of them satisfies both criteria, and suggest new directions for research on alternative metagrammatical formalisms.
{ "name": [ "Shieber, Stuart M. and", "Stucky, Swan U. and", "Uszkoreit, Hans and", "Robinson, Jane J." ], "affiliation": [ null, null, null, null ] }
null
null
21st Annual Meeting of the Association for Computational Linguistics
1983-06-01
18
14
null
The computational-linguistics community has recently shown interest in a variety of metagrammatical formalisms for encoding grammars of natural language. A common technique found in these formalisms involves the notion of a metarule, which, in its most common conception, is a device used to generate grammar rules from other given grammar rules.1 A metarule is essentially a statement declaring that, if a grammar contains rules that match one specified pattern, it also contains rules that match some other specified pattern. For example, the following metarule states that, if there is a rule that expands a finite VP into a finite auxiliary and a nonfinite VP, there will also be a rule that expands the VP as before except for an additional adverb between the auxiliary and the nonfinite VP.2 The patterns may contain variables, in which case they characterize 'families' of related rules rather than individual pairs. *This research was supported by the National Science Foundation grant No. IST-8103550. The views and conclusions expressed in this document are those of the authors and should not be interpreted as representative of the views of the National Science Foundation or the United States government. We are indebted to Fernando Pereira, Stanley Peters, and Stanley Rosenschein for many helpful discussions leading to the writing of this paper. 1 Metarules were first utilized for natural-language research and are most extensively developed within the theory of Generalized Phrase Structure Grammar (GPSG) [Gazdar and Pullum, 1982; Gawron et al., 1982; Thompson, 1982]. 2 A metarule similar to our example was proposed by Gazdar, Pullum, and Sag [1982]. The metarule notion is a seductive one, intuitively allowing generalizations about the grammar of a language to be stated concisely. However, unconstrained metarule formalisms may possess more expressive power than is apparently needed, and, moreover, they are not always computationally "safe." For example, they may generate infinite sets of rules and describe arbitrary languages. In this paper we examine both the formal and linguistic implications of various constraints on metagrammatical formalisms consisting of a combination of context-free phrase structure rules and metarules, which we will call metarule phrase-structure (MPS) grammars. The term "MPS grammar" is used in two ways in this paper. An MPS grammar can be viewed as a grammar in its own right that characterizes a language directly. Alternatively, it can be viewed as a metagrammar, that is, as a generator of a phrase structure object grammar, the characterized language being defined as the language of the object grammar. Uszkoreit and Peters [1982] have developed a formal definition of MPS grammars and have shown that an unconstrained MPS grammar can encode any recursively enumerable language. As long as the framework for grammatical description is not seen as part of a theory of natural language, this fact may not affect the usefulness of MPS grammars as tools for purely descriptive linguistics research; however, it has direct and obvious impact on those doing research in a computational or theoretical linguistic paradigm. Clearly, some way of constraining the power of MPS grammars is necessary to enable their use for encoding grammars in a computationally feasible way.
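Before turning to those proposals, the following small Prolog sketch makes the rule-generating reading of metarules concrete (the encoding and the names adverb_metarule, base_rule, grammar_rule, and the category symbols are illustrative assumptions of ours, not notation from the paper):

    % Rules are encoded as rule(LHS, RHS) terms.  The metarule says: whenever
    % the grammar has a rule expanding a finite VP as a finite auxiliary plus
    % a nonfinite VP, it also has the same rule with an adverb in between.
    adverb_metarule(rule(vp_fin, [aux_fin, vp_inf]),
                    rule(vp_fin, [aux_fin, advp, vp_inf])).

    base_rule(rule(vp_fin, [aux_fin, vp_inf])).
    base_rule(rule(s, [np, vp_fin])).

    % The object grammar contains the base rules plus everything any
    % metarule derives from them.
    grammar_rule(R) :- base_rule(R).
    grammar_rule(R) :- base_rule(B), adverb_metarule(B, R).

    % ?- grammar_rule(rule(vp_fin, RHS)).
    %    RHS = [aux_fin, vp_inf] ;  RHS = [aux_fin, advp, vp_inf].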
In the sections that follow, we consider several formal proposals for constraining their power and discuss some of their computational and linguistic ramifications. In our discussion of the computational ramifications of the proposed constraints, we will use the notion of weak-generative capacity as a barometer of the expressive power of a formalism. Other notions of expressivity are possible, although some of the traditional ones may not be applicable to MPS grammars. Strong-generative capacity, for instance, though well-defined, seems to be an inadequate notion for comparison of MPS grammars, since it would have to be extended to include information about rule derivations as well as tree derivations. Similarly, we do not mean to imply by our arguments that the class of natural languages corresponds to some class that ranks low in the Chomsky hierarchy merely because the higher classes are less constrained in weak-generative power. The appropriate characterization of possible natural languages may not coincide at all with the divisions in the Chomsky hierarchy. Nevertheless weak-generative capacity--the weakest useful metric of capacity--will be the primary concern of this paper as a well-defined and relevant standard for measuring constraints. Peters and Ritchie [1973] have pointed out that context-sensitive grammars have no more than context-free power when their rules are viewed as node-admissibility conditions. This suggests that MPS grammars might be analogously constrained by regarding the metarules as something other than phrase-structure grammar generators. A brief examination of three alternative approaches indicates, however, that none of them clearly yields any useful constraints on weak-generative capacity. Two of the alternatives discussed below consider metarules to be part of the grammar itself, rather than as part of the metagrammar. The third views them as a set of redundant generalizations about the grammar. Since it appears unlikely that a reinterpretation of MPS grammars can be found that solves their complexity problem, formal constraints on the MPS formalism itself have to be explored if we want to salvage the basic concept of metarules. In the following examination of currently proposed constraints, the two criteria for evaluation are their effects on computational tractability and on the explanatory adequacy of the formalism. As an example of constraints that satisfy the criterion of computational tractability but not that of explanatory adequacy, we examine the issue of essential variables. These are variables in the metarule pattern that can match an arbitrary string of items in a phrase structure rule. Uszkoreit and Peters have shown that, contrary to an initial conjecture by Joshi (see [Gazdar, 1982, fn. 28]), allowing even one such variable per metarule extends the power of the formalism to recursive enumerability. Gazdar has recommended [1982, p. 160] that the power of metarules be controlled by eliminating essential variables, exchanging them for abbreviatory variables that can stand only for strings in a finite and extrinsically determined range. This constraint yields a computationally tractable system with only context-free power. Exchanging essential for abbreviatory variables is not, however, as attractive a prospect as it appears at first blush. Uszkoreit and Peters [1982] show that by restricting MPS grammars to using abbreviatory variables only, some significant generalizations are lost.
Consider the following metarule, proposed and motivated in [Gazdar 1982] for endowing VSO languages with the category VP. The metarule generates flat VSO sentence rules from VP rules.

(2) VP → V U  ⇒  S → V NP U

Since U is an abbreviatory variable, its range needs to be stated explicitly. Let us imagine that the VSO language in question has the following small set of VP rules:

(3) VP → V
    VP → V NP
    VP → V S̄
    VP → V VP
    VP → V NP VP

Therefore, the range of U has to be {e, NP, S̄, VP, NP VP}. 3 As statements about the object grammar, however, metarules might play a role in language acquisition or in diachronic processes. If these VP rules are the only rules that satisfy the left-hand side of (2), then (2) generates exactly the same rules as it would if we declared U to be an essential variable--i.e., let its range be (V_T ∪ V_N)*. But now imagine that the language adopts a new subcategorization frame for verbs,4 e.g., a verb that takes an NP and an S̄ as complements. VP rule (4) is added:

(4) VP → V NP S̄

Metarule (2) predicts that VPs headed by this verb do not have a corresponding flat VSO sentence rule. We will have to change the metarule by extending the range of U in order to retain the generalization originally intended by the metarule. Obviously, our metarule did not encode the right generalization (a simple intension-extension problem). This shortcoming can also surface in cases where the input to a metarule is the output of another metarule. It might be that metarule (2) not only applies to basic verb rules but also includes the output of, say, a passive rule. The range of the variable U would have to be extended to cover these cases too, and, moreover, might have to be altered if its feeding metarules change. Thus, if the restriction to abbreviatory variables is to have no effect on the weak-generative capacity of a grammar, the range assigned to each variable must include the range that would have actually instantiated the variable on an expansion of the MPS grammar in which the variable was treated as essential. The assignment of a range to the variable can only be done post factum. This would be a satisfactory result, were it not for the fact that finding the necessary range of a variable in this way is an undecidable problem in general. Thus, to exchange essential for abbreviatory variables is to risk affecting the generative capacity of the grammar--with quite unintuitive and unpredictable results. In short, the choice is among three options: to affect the language of the grammar in ways that are linguistically unmotivated and arbitrary, to solve an undecidable problem, or to discard the notion of exchanging essential for abbreviatory variables--in effect, a Hobson's choice. An example of a constraint that satisfies the second criterion, that of explanatory adequacy, but not the first, computational tractability, is the lexical-head constraint of GPSG. This constraint allows metarules to operate only on rules whose stipulated head is a lexical (preterminal) category. Since the Uszkoreit and Peters results are achieved even under this restriction to the formalism, the constraint does not provide a solution to the problem of expressive power. Of course, this is no criticism of the proposal, since it was never intended as a formal restriction on the class of languages, but rather as a restriction on linguistically motivated grammars. Unfortunately, the motivation behind even this use of the lexical-head constraint may be lacking.
An example of a constraint that satisfies the second criterion, that of explanatory adequacy, but not the first, computational tractability, is the lexical-head constraint of GPSG. This constraint allows metarules to operate only on rules whose stipulated head is a lexical (preterminal) category. Since the Uszkoreit and Peters results are achieved even under this restriction to the formalism, the constraint does not provide a solution to the problem of expressive power. Of course, this is no criticism of the proposal, since it was never intended as a formal restriction on the class of languages, but rather as a restriction on linguistically motivated grammars. Unfortunately, the motivation behind even this use of the lexical-head constraint may be lacking. One of the few analyses that relies on the lexical-head constraint is a recent GPSG analysis of coordination and extraction in English [Gazdar, 1981]. In this case--indeed, in general--one could achieve the desired effect simply by specifying that the coefficient of the bar feature be lexical. It remains to be seen whether the constraint must be imposed for enough metarules so as to justify its incorporation as a general principle. Even with such motivation one might raise a question about the advisability of the lexical-head constraint on a metatheoretical level. The linguistic intuition behind the constraint is that the role of metarules is to "express generalizations about possibilities of subcategorization" exclusively [Gazdar, Klein, Pullum, and Sag, 1982, p. 391], e.g., to express the passive-active relation. This result is said to follow from principles of X-bar syntax [Jackendoff, 1977], in which just those categories that are subcategorized for are siblings of a lexical head. However, in a language with freer word order than English, categories other than those subcategorized for will be siblings of lexical heads; they would, thus, be affected by metarules even under the lexical-head constraint. This result will certainly follow from the liberation rule approach to free word order [Pullum, 1982]. The original linguistic generalization intended by the lexical-head constraint, therefore, will not hold cross-linguistically. Finally, there is the current proposal of the GPSG community for constraining the formal powers of metarules by allowing each metarule to apply only once in a derivation of a rule. Originally dubbed the once-through hypothesis, this constraint is now incorporated into GPSG under the name finite closure. Although linguistic evidence for the constraint has never been provided, the formal motivation is quite strong because, under this constraint, the metarule formalism would have only context-free power. Several linguistic constructions present problems with respect to the adequacy of the finite-closure hypothesis. For instance, the liberation rule technique for handling free-word-order languages [Pullum, 1982] would require a noun-phrase liberation rule to be applied twice in a derivation of a rule with sibling noun phrases that permute their subconstituents freely among one another. As a hypothetical example of this phenomenon, let us suppose that English allowed relative clauses to be extraposed in general from noun phrases, instead of allowing just one extraposition. For instance, in this quasi-English, the sentence (5) "Two children are chasing the dog who are small that is here" would be a grammatical paraphrase of (6) "Two children who are small are chasing the dog that is here." Let us suppose further that the analysis of this phenomenon involved liberation of the NP-S substructure of the noun phrases for incorporation into the main sentence. Then the noun-phrase liberation rule would apply once to liberate the subject noun phrase, and once again to liberate the object noun phrase. That these are not idle concerns is demonstrated by the following sentence in the free-word-order Australian aboriginal language Warlpiri. (Footnote 4: Note that it does not matter whether the grammar writer discovers an additional subcategorization, or the language develops one diachronically; the same problem obtains.)
(Footnote 5: This example is taken from [van Riemsdijk, 1981].) (7) Kurdu-jarra-rlu ka-pala maliki wita-jarra-rlu yalumpu wajilipi-nyi (child-DUAL-ERG AUX:DUAL dog-ABS small-DUAL-ERG that-ABS chase=NONPAST) 'Two small children are chasing that dog.' The Warlpiri example is analogous to the quasi-English example in that both sentences have two discontinuous NPs in the same distribution. Furthermore, the liberation rule approach has been proposed as a method of modeling the free word order of Warlpiri. Thus, it appears that finite closure is not consistent with the liberation rule approach to free word order. Adverb distribution presents another problem for the hypothesis. In German, for example, and to a lesser extent in English, an unbounded number of adverbs can be quite freely interspersed with the complements of a verb. The following German sentence is an extreme example of this phenomenon [Uszkoreit, 1982]. The sequence of its major constituents is given under (9). A metarule might therefore be proposed that inserts a single adverb in a verb-phrase rule. Repeated application of this rule (in contradiction to the finite-closure hypothesis) would achieve the desired effect. To maintain the finite-closure hypothesis, we could merely extend the notion of context-free rule to allow regular expressions on the right-hand side of a rule. The verb phrase rule would then be accurately, albeit clumsily, expressed as, say, VP → V NP ADVP*, or VP → V NP ADVP* PP ADVP* for ditransitives. Similar constructions in free-word-order languages do not permit such naive solutions. As an example, let us consider the Japanese causative. In this construction, the verb suffix "-sase" signals the causativization of the verb, allowing an extra NP argument. The process is putatively unbounded (ignoring performance limitations). Furthermore, Japanese allows the NPs to order freely relative to one another (subject to considerations of ambiguity and focus), so that a flat structure with some kind of extrinsic ordering is presumably preferable. One means of achieving a flat structure with extrinsic ordering is by using the ID/LP formalism, a subformalism of GPSG that allows immediate dominance (ID) information to be specified separately from linear precedence (LP) notions. (Cf. context-free phrase structure grammar, which forces a strict one-to-one correlation between the two types of information.) ID information is specified by context-free-style rules with unordered right-hand sides, notated, e.g., A → B, C, D. LP information is specified as a partial order over the nonterminals in the grammar, notated, e.g., B < C (read B precedes C). These two rules can be viewed as schematizing a set of three context-free rules, namely, A → B C D, A → B D C, and A → D B C. Without a causativization metarule that can operate more than once, we might attempt to use the regular expression notation that solved the adverb problem. For example, we might postulate the ID rule VP → NP*, V, sase* with the LP relation NP < V < sase, but no matching of NPs with sases is achieved. We might attempt to write a liberation rule that pulls NP-sase pairs from a nested structure into a flat one, but this would violate the finite-closure hypothesis (as well as Pullum's requirement precluding liberation through a recursive category). We could attempt to use even more of the power of regular-expression rules with ID/LP, i.e., VP → {NP, sase}*, V under the same LP relation.
The formalism presupposed by this analysis, however, has greater than context-free power, so that this solution may not be desirable. Nevertheless, it should not be ruled out before the parsing properties of such a formalism are understood. Gunji's analysis of Japanese, which attempts to solve such problems with the multiple application of a slash introduction metarule [Gunji, 1980], again raises the problem of violating the finite-closure hypothesis (as well as being incompatible with the current version of GPSG, which disallows multiple slashes). Finally, we could always move causativization into the lexicon as a lexical rule. Such a move, though it does circumvent the difficulty in the syntax, merely serves to move it elsewhere without resolving the basic problem. Yet another alternative involves treating the right-hand sides of phrase structure rules as sets, rather than multisets as is implicit in the ID/LP format. Since the nonterminal vocabulary is finite, right-hand sides of ID rules must be subsets of a finite set and therefore finite sets themselves. This hypothesis is quite similar in effect to the finite-closure hypothesis, albeit even more limited, and thus inherits the same problems as were discussed above.
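The ID/LP format used in the causative discussion above can be made concrete with a small sketch. The enumeration below is an assumed, illustrative reading of how a schematized ID rule expands into ordinary context-free rules; it is not an implementation of GPSG, and it presupposes distinct category symbols on the right-hand side (repeated categories, as in the NP/sase case, would need a more careful treatment).

```python
# A small sketch of how an ID rule plus LP statements schematizes a set of
# ordinary context-free expansions.
from itertools import permutations

def linearizations(id_rhs, lp):
    """Return every ordering of the unordered right-hand side that respects
    all linear-precedence statements (a, b), read 'a precedes b'."""
    orders = []
    for order in permutations(id_rhs):
        if all(order.index(a) < order.index(b)
               for a, b in lp if a in order and b in order):
            orders.append(list(order))
    return orders

# A -> B, C, D  with  B < C  yields exactly the three rules cited above.
print(linearizations(["B", "C", "D"], [("B", "C")]))
# [['B', 'C', 'D'], ['B', 'D', 'C'], ['D', 'B', 'C']]
```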
null
null
null
Stucky [forthcoming] investigates the possibility of defining metarules as complex node-admissibility conditions, which she calls meta-node-admissibility conditions. Two computationally desirable results could ensue, were this reinterpretation possible. Because the metarules do not generate rules under the meta-node-admissibility interpretation, it follows that there will be neither a combinatorial explosion of rules nor any derivation resulting in an infinite set of rules (both of which are potential problems that could arise under the original generative interpretation). For this reinterpretation to have a computationally tractable implementation, however, two preconditions must be met. First, an independent mechanism must be provided that assigns to any string a finite set of trees, including those admitted by the metarules together with the base rules. Second, a procedure must be defined that checks node admissibilities according to the base rules and metarules of the grammar--and that terminates. It is this latter condition that we suspect will not be possible without constraining the weak-generative capacity of MPS grammars. Thus, this perspective does not seem to change the basic expressivity problems of the formalism by itself. A second alternative, proposed by Kay [1982], is one in which metarules are viewed as chart-manipulating operators on a chart parser. Here too, the metarules are not part of a metagrammar that generates a context-free grammar; rather, they constitute a second kind of rule in the grammar. Just like the meta-node-admissibility interpretation, Kay's explication seems to retain the basic problem of expressive power, though Kay hints at a gain in efficiency if the metarules are compiled into a finite-state transducer. Finally, an alternative that does not integrate metarules into the object grammar but, on the other hand, does not assign them a role in generating an object grammar either, is to view them as redundancy statements describing the relationships that hold among rules in the full grammar. This interpretation eliminates the problem of generating infinite rule sets that gave rise to the Uszkoreit and Peters results. However, it is difficult to see how the solution supports a computationally useful notion of metarules, since it requires that all rules of the grammar be stated explicitly. Confining the role of metarules to that of stating redundancies prevents their productive application, so that the metarules serve no clear computational purpose for grammar implementation. We thus conclude that, in contrast to context-sensitive grammar, in which an alternative interpretation of the phrase structure rules makes a difference in weak-generative capacity, MPS grammars do not seem to benefit from the reinterpretations we have investigated. An obvious way to constrain MPS grammars is to eliminate metarules entirely and replace them with other mechanisms. In fact, within the GPSG paradigm, several of the functions of metarules have been replaced by other metagrammatical devices. Other functions have not, as of the writing of this paper, though it is instructive to consider the cases covered by this class. In the discussion to follow we have isolated three of the primary functions of metarules. This is not intended as an exhaustive taxonomy, and certain metarules may manifest more than one of these functions. First, we consider generalizations over linear order.
If metarules are metagrammatical statements about rules encoding linear order, they may relate rules that differ only in the linear order of categories. With the introduction of ID/LP format, however, the hypothesis is that this latter metagrammatical device will suffice to account for the linear order among the categories within rules. For instance, the problematic adverb and causative metarules could be replaced by extended context-free rules with ID/LP, as was suggested in Section 3 above. Shieber [forthcoming] has shown that a pure ID/LP formalism (without metarules, Kleene star, or the like) is no less computationally tractable than context-free grammars themselves. Although we do not yet know what the consequences of incorporating the extended context-free rules would be for computational complexity, ID/LP format can be used to replace certain word-order-variation metarules. A second function of metarules was to relate sets of rules that differed only in the values of certain specified features. It has been suggested [Gazdar and Pullum, 1982] that such features are distributed according to certain general principles. For instance, the slash-propagation metarule has been replaced by the distribution of slash features in accord with such a principle. A third function of metarules under the original interpretation has not been relegated to other metagrammatical devices. We have no single device to suggest, though we are exploring alternative ways to account for the phenomena. Formally, this third class can be characterized as comprising those metarules that relate sets of rules in which the number of categories on the right- and left-hand sides of rules differ. It is this sort of metarule that is essential for the extension of GPSGs beyond context-free power in the Uszkoreit and Peters proofs [1982]. Simply requiring that such metarules be disallowed would not resolve the linguistic issues, however, since this constraint would inherit the problems connected with the regular expression and set notations discussed in Section 3 above. This third class further breaks down into two cases: those that have different parent categories on the right- and left-hand sides of the metarule and those that have the same category on both sides. The first case includes those liberation rules that figure in analyses of free-word-order phenomena, plus such other rules as the subject-auxiliary-inversion metarule in English. Uszkoreit [forthcoming] is exploring a method for isolating liberation rules in a separate metagrammatical formalism. It also appears that subject-auxiliary inversion may be analyzed by already existing principles governing the distribution of features. The second case (those in which the categories on the right- and left-hand sides are the same) includes such analyses as the passive in English. This instance, at least, might be replaced by a lexical-redundancy rule. Thus, no uniform solution has yet been found for this third function of metarules. We conclude that it may be possible to replace MPS-style metagrammatical formalisms entirely without losing generalizations, and we are consequently pursuing research to this end. The formal power of metarule formalisms is clearly an important consideration for computational linguists. Uszkoreit and Peters [1982] have shown that the potential exists for defining metarule formalisms that are computationally "unsafe." However, these results do not sound a death knell for metarules.
On the contrary, the safety of metarule formalisms is still an open question. We have merely shown that the constraints on metarules necessary to make them formally tractable will have to be based on empirical linguistic evidence as well as solid formal research. The solutions to constraining metarules analyzed here seem to be either formally or linguistically inadequate. Further research is needed into the actual uses of metarules and into constructions that are problematic for metarules, so as to develop either linguistically motivated and computationally interesting constraints on the formalisms, or alternative formalisms that are linguistically adequate but not heir to the problems of metarules.
Main paper: introduction: The computational-linguistics community has recently shown interest in a variety of metagrammatical formalisms for encoding grammars of natural language. A common technique found in these formalisms involves the notion of a metarule, which, in its most common conception, is a device used to generate grammar rules from other given grammar rules. A metarule is essentially a statement declaring that, if a grammar contains rules that match one specified pattern, it also contains rules that match some other specified pattern. For example, the following metarule states that, if there is a rule that expands a finite VP into a finite auxiliary and a nonfinite VP, there will also be a rule that expands the VP as before except for an additional adverb between the auxiliary and the nonfinite VP. The patterns may contain variables, in which case they characterize "families" of related rules rather than individual pairs. (This research was supported by National Science Foundation grant No. IST-8103550. The views and conclusions expressed in this document are those of the authors and should not be interpreted as representative of the views of the National Science Foundation or the United States government. We are indebted to Fernando Pereira, Stanley Peters, and Stanley Rosenschein for many helpful discussions leading to the writing of this paper.) (Footnote 1: Metarules were first utilized for natural-language research and are most extensively developed within the theory of Generalized Phrase Structure Grammar (GPSG) [Gazdar and Pullum, 1982; Gawron et al., 1982; Thompson, 1982].) (Footnote 2: A metarule similar to our example was proposed by Gazdar, Pullum, and Sag [1982].) The metarule notion is a seductive one, intuitively allowing generalizations about the grammar of a language to be stated concisely. However, unconstrained metarule formalisms may possess more expressive power than is apparently needed, and, moreover, they are not always computationally "safe." For example, they may generate infinite sets of rules and describe arbitrary languages. In this paper we examine both the formal and linguistic implications of various constraints on metagrammatical formalisms consisting of a combination of context-free phrase structure rules and metarules, which we will call metarule phrase-structure (MPS) grammars. The term "MPS grammar" is used in two ways in this paper. An MPS grammar can be viewed as a grammar in its own right that characterizes a language directly. Alternatively, it can be viewed as a metagrammar, that is, as a generator of a phrase structure object grammar, the characterized language being defined as the language of the object grammar. Uszkoreit and Peters [1982] have developed a formal definition of MPS grammars and have shown that an unconstrained MPS grammar can encode any recursively enumerable language. As long as the framework for grammatical description is not seen as part of a theory of natural language, this fact may not affect the usefulness of MPS grammars as tools for purely descriptive linguistics research; however, it has direct and obvious impact on those doing research in a computational or theoretical linguistic paradigm. Clearly, some way of constraining the power of MPS grammars is necessary to enable their use for encoding grammars in a computationally feasible way.
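To make the generative reading of a metarule concrete, here is a minimal sketch of the adverb metarule described above acting as a rule generator. The rule encoding and category names are illustrative assumptions of this sketch, not the paper's notation.

```python
# A minimal sketch of a metarule as a generator of phrase structure rules:
# if VP[+fin] -> AUX[+fin] VP[-fin] is in the grammar, then so is
# VP[+fin] -> AUX[+fin] ADV VP[-fin].

def adverb_metarule(rules):
    return [(lhs, ["AUX[+fin]", "ADV", "VP[-fin]"])
            for lhs, rhs in rules
            if lhs == "VP[+fin]" and rhs == ["AUX[+fin]", "VP[-fin]"]]

base_rules = [("S", ["NP", "VP[+fin]"]),
              ("VP[+fin]", ["AUX[+fin]", "VP[-fin]"]),
              ("VP[-fin]", ["V", "NP"])]

# The object grammar is the base rules plus whatever the metarule generates.
object_grammar = base_rules + adverb_metarule(base_rules)
```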
null
null
null
null
{ "paperhash": [ "gawron|processing_english_with_a_generalized_phrase_structure_grammar", "peters|context-sensitive_immediate_constituent_analysis—context-free_languages_revisited" ], "title": [ "Processing English With a Generalized Phrase Structure Grammar", "Context-sensitive immediate constituent analysis—context-free languages revisited" ], "abstract": [ "This paper describes a natural language processing system implemented at Hewlett-Packard's Computer Research Center. The system's main components are: a Generalized Phrase Structure Grammar (GPSG); a top-down parser; a logic transducer that outputs a first-order logical representation; and a \"disambiguator\" that uses sortal information to convert \"normal-form\" first-order logical expressions into the query language for HIRE, a relational database hosted in the SPHERE system. We argue that theoretical developments in GPSG syntax and in Montague semantics have specific advantages to bring to this domain of computational linguistics. The syntax and semantics of the system are totally domain-independent, and thus, in principle, highly portable. We discuss the prospects for extending domain-independence to the lexical semantics as well, and thus to the logical semantic representations.", "The ability of context-sensitive grammars to generate non-context-free languages is well-known. However, phrase structure rules are often used in both natural and artificial languages, not to generate sentences, but rather to analyze or parse given putative sentences. Linguistic arguments have been advanced that this is the more fruitful use of context-sensitive rules for natural languages, and that, further, it is the purported phrase-structure tree which is presented and analyzed, rather than merely the terminal string itself. In this paper, a language is shown to be context-free if and only if there is a finite set of context-sensitive rules which parse this language; i.e., if and only if there is a collection of trees whose terminal strings are this language and a finite set of context-sensitive rules which analyze exactly these trees." ], "authors": [ { "name": [ "J. Gawron", "Jonathan J. King", "J. Lamping", "E. Loebner", "E. Anne Paulson", "G. Pullum", "Ivan Sag", "T. Wasow" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "P. S. Peters", "Robert W. Ritchie" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null ], "s2_corpus_id": [ "14372141", "3187585" ], "intents": [ [], [] ], "isInfluential": [ false, false ] }
Problem: Unconstrained metarule formalisms in MPS grammars can be computationally "unsafe," and proposed restrictions on them do not always satisfy both computational tractability and explanatory adequacy. Solution: The hypothesis of the paper is that proposed constraints on metarule phrase-structure (MPS) grammars can be evaluated against these two criteria, with the aim of enabling MPS grammars to encode grammars in a computationally feasible manner.
500
0.028
null
null
null
null
null
null
null
null
39a653c2d1ee8af89de146923f97dda3b935c0db
5991639
null
Syntactic Constraints and Efficient Parsability
A central goal of linguistic theory is to explain why natural languages are the way they are. It has often been supposed that computational considerations ought to play a role in this characterization, but rigorous arguments along these lines have been difficult to come by. In this paper we show how a key "axiom" of certain theories of grammar, Subjacency, can be explained by appealing to general restrictions on on-line parsing plus natural constraints on the rule-writing vocabulary of grammars. The explanation avoids the problems with Marcus' [1980] attempt to account for the same constraint. The argument is robust with respect to machine implementation, and thus avoids the problems that often arise when making detailed claims about parsing efficiency. It has the added virtue of unifying in the functional domain of parsing certain grammatically disparate phenomena, as well as making a strong claim about the way in which the grammar is actually embedded into an on-line sentence processor.
{ "name": [ "Berwick, Robert C. and", "Weinberg, Amy S." ], "affiliation": [ null, null ] }
null
null
21st Annual Meeting of the Association for Computational Linguistics
1983-06-01
11
6
null
In its short history, computational linguistics has been driven by two distinct but interrelated goals. On the one hand, it has aimed at computational explanations of distinctively human linguistic behavior--that is, accounts of why natural languages are the way they are, viewed from the perspective of computation. On the other hand, it has accumulated a stock of engineering methods for building machines to deal with natural (and artificial) languages. Sometimes a single body of research has combined both goals. This was true of the work of Marcus [1980], for example. But all too often the goals have remained opposed--even to the extent that current transformational theory has been disparaged as hopelessly "intractable" and no help at all in constructing working parsers. This paper shows that modern transformational grammar (the "Government-Binding" or "GB" theory as described in Chomsky [1981]) can contribute to both aims of computational linguistics. We show that by combining simple assumptions about efficient parsability along with some assumptions about just how grammatical theory is to be "embedded" in a model of language processing, one can actually explain some key constraints of natural languages, such as Subjacency. (The argument is different from that used in Marcus [1980].) In fact, almost the entire pattern of constraints taken as "axioms" by the GB theory can be accounted for. Second, contrary to what has sometimes been supposed, by exploiting these constraints we can show that a GB-based theory is particularly compatible with efficient parsing designs, in particular, with extended LR(k,t) parsers (of the sort described by Marcus [1980]). We can extend the LR(k,t) design to accommodate such phenomena as antecedent-PRO and pronominal binding, rightward movement, gapping, and VP deletion. Let us consider how to explain locality constraints in natural languages. First of all, what exactly do we mean by a "locality constraint"? The paradigm case is that of Subjacency: the distance between a displaced constituent and its "underlying" canonical argument position cannot be too large, where the distance is gauged (in English) in terms of the number of S(entence) or NP phrase boundaries. For example, in sentence (1a) below, John (the so-called "antecedent") is just one S-boundary away from its presumably "underlying" argument position (denoted "x", the "trace") as the Subject of the embedded clause, and the sentence is fine: (1a) John seems [S x to like ice cream]. However, all we have to do is to make the link between John and x extend over two S's, and the sentence is ill-formed: (1b) John seems [S it is certain [S x to like ice cream]]. This restriction entails a "successive cyclic" analysis of transformational rules (see Chomsky [1973]). In order to derive a sentence like (1c) below without violating the Subjacency condition, we must move the NP from its canonical argument position through the empty Subject position in the next higher S and then to its surface slot: (1c) John seems [e] to be certain x to get the ice cream. Since the intermediate subject position is filled in (1b), there is no licit derivation for this sentence. More precisely, we can state the Subjacency constraint as follows: No rule of grammar can involve X and Y in a configuration like the following, [ ...X... [α ... [β ...Y... ] ... ] ...X... ], where α and β are bounding nodes (in English, S or NP phrases).
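A rough sketch of how the constraint could be checked against an annotated surface structure may help fix the idea. The tree encoding, the treatment of the trace as a bare leaf, and the choice of bounding labels below are simplifying assumptions of this sketch, not the paper's representation.

```python
# A sketch of Subjacency as a check on a parse tree: an antecedent/trace pair
# is licit if at most one bounding node (S or NP here) dominates the trace
# without also dominating the antecedent.

BOUNDING = {"S", "NP"}

def path_to(node, leaf):
    """Nodes are (label, children) tuples; leaves are strings.  Returns the
    list of nodes on the path from the root down to the given leaf."""
    _, children = node
    for child in children:
        if child == leaf:
            return [node]
        if isinstance(child, tuple):
            below = path_to(child, leaf)
            if below:
                return [node] + below
    return []

def obeys_subjacency(tree, antecedent, trace):
    above_antecedent = {id(n) for n in path_to(tree, antecedent)}
    intervening = [n for n in path_to(tree, trace)
                   if id(n) not in above_antecedent]
    return sum(1 for label, _ in intervening if label in BOUNDING) <= 1

# Roughly (1a): one S separates 'John' from its trace 'x'.
emb = ("S", ["x", ("VP", ["to", "like", "ice", "cream"])])
good = ("S", [("NP", ["John"]), ("VP", [("V", ["seems"]), emb])])
# Roughly (1b): two S nodes intervene, so the dependency is illicit.
emb2 = ("S", ["x", ("VP", ["to", "like", "ice", "cream"])])
mid = ("S", [("NP", ["it"]), ("VP", [("V", ["is"]), ("AP", ["certain", emb2])])])
bad = ("S", [("NP", ["John"]), ("VP", [("V", ["seems"]), mid])])

print(obeys_subjacency(good, "John", "x"))   # True
print(obeys_subjacency(bad, "John", "x"))    # False
```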
Why should natural languages be designed this way and not some other way? Why, that is, should a constraint like Subjacency exist at all? Our general result is that under a certain set of assumptions about grammars and their relationship to human sentence processing one can actually expect the following pattern of syntactic locality constraints: (1) The antecedent-trace relationship must obey Subjacency, but other "binding" relationships (e.g., NP-PRO) need not obey Subjacency. (2) Gapping constructions must be subject to a bounding condition resembling Subjacency, but VP deletion need not be. (3) Rightward movement must be strictly bounded. To the extent that this predicted pattern of constraints is actually observed--as it is in English and other languages--we obtain a genuine functional explanation of these constraints and support for the assumptions themselves. The argument is different from Marcus' because it accounts for syntactic locality constraints (like Subjacency) as the joint effect of a particular theory of grammar, a theory of how that grammar is used in parsing, a criterion for efficient parsability, and a theory of how the parser is built. In contrast, Marcus attempted to argue that Subjacency could be derived from just the (independently justified) operating principles of a particular kind of parser. The assumptions we make are the following: (1) The grammar includes a level of annotated surface structure indicating how constituents have been displaced from their canonical predicate argument positions. Further, sentence analysis is divided into two stages, along the lines indicated by the theory of Government and Binding: the first stage is a purely syntactic analysis that rebuilds annotated surface structure; the second stage carries out the interpretation of variables and binds them to operators, all making use of the "referential indices" of NPs. (2) To be "visible" at a stage of analysis a linguistic representation must be written in the vocabulary of that level. For example, to be affected by syntactic operations, a representation must be expressed in a syntactic vocabulary (in the usual sense); to be interpreted by operations at the second stage, the NPs in a representation must possess referential indices. (This assumption is not needed to derive the Subjacency constraint, but may be used to account for another "axiom" of current grammatical theory, the so-called "constituent command" constraint on antecedents and the variables that they bind.) This "visibility" assumption is a rather natural one. (3) The rule-writing vocabulary of the grammar cannot make use of arithmetic predicates such as "one," "two," or "three," but only such predicates as "adjacent." Further, quantificational statements are not allowed in rules. These two assumptions are also rather standard. It has often been noted that grammars "do not count"--that grammatical predicates are structurally based. There is no rule of grammar that takes just the fourth constituent of a sentence and moves it, for example. In contrast, many different kinds of rules of grammar make reference to adjacent constituents. (This is a feature found in morphological, phonological, and syntactic rules.) (4) Parsing is not done via a method that carries along (a representation of) all possible derivations in parallel. In particular, an Earley-type algorithm is ruled out.
To the extent that multiple options about derivations are not pursued, the parse is "deterministic." (5) The left-context of the parse (as defined in Aho and Ullman [1972]) is literally represented, rather than generatively represented (as, e.g., a regular set). In particular, just the symbols used by the grammar (S, NP, VP, ...) are part of the left-context vocabulary, and not "complex" symbols serving as proxies for the set of left-context strings. In effect, we make the (quite strong) assumption that the sentence processor adopts a direct, transparent embedding of the grammar. Other theories or parsing methods do not meet these constraints and fail to explain the existence of locality constraints with respect to this particular set of assumptions. For example, as we show, there is no reason to expect a constraint like Subjacency in the Generalized Phrase Structure Grammars (GPSGs) of Gazdar [1981], because there is no inherent barrier to easily processing a sentence where an antecedent and a trace are unboundedly far from each other. Similarly, if a parsing method like Earley's algorithm were actually used by people, then Subjacency remains a mystery on the functional grounds of efficient parsability. (It could still be explained on other functional grounds, e.g., that of learnability.) To begin the actual argument then, assume that on-line sentence processing is done by something like a deterministic parser. Sentences like (2) cause trouble for such a parser: (2) (Footnote 2: Plainly, one is free to imagine some other set of assumptions that would do the job.) (Footnote 3: If one assumes a backtracking parser, then the argument can also be made to go through, but only by assuming that backtracking is very costly. Since this sort of parser clearly subsumes the LR(k)-type machines under the right construal of "cost," we make the stronger assumption of LR(k)-ness.) The problem is that on recognizing the verb eat the parser must decide whether to expand the parse with a trace (the transitive reading) or with no postverbal element (the intransitive reading). The ambiguity cannot be locally resolved since eat takes both readings. It can only be resolved by checking to see whether there is an actual antecedent. Further, observe that this is indeed a parsing decision: the machine must make some decision about how to build a portion of the parse tree. Finally, given non-parallelism, the parser is not allowed to pursue both paths at once: it must decide now how to build the parse tree (by inserting an empty NP trace or not). Therefore, assuming that the correct decision is to be made on-line (or that retractions of incorrect decisions are costly), there must be an actual parsing rule that expands a category as transitive iff there is an immediate postverbal NP in the string (no movement) or if an actual antecedent is present. However, the phonologically overt antecedent can be unboundedly far away from the gap. Therefore, it would seem that the relevant parsing rule would have to refer to a potentially unbounded left context. Such a rule cannot be stated in the finite control table of an LR(k) parser. Therefore we must find some finite way of expressing the domain over which the antecedent must be searched. There are two ways of accomplishing this. First, one could express all possible left-contexts as some regular set and then carry this representation along in the finite control table of the LR(k) machine. This is always possible in the case of a context-free grammar, and in fact is the "standard" approach.
However, in the case of (e.g.) wh-movement, this demands a generative encoding of the associated finite state automaton, via the use of complex symbols like "S/wh" (denoting the "state" that a wh has been encountered) and rules to pass along this non-literal representation of the state of the parse. This approach works, since we can pass along this state encoding through the VP (via the complex non-terminal symbol VP/wh) and finally into the embedded S. This complex non-terminal is then used to trigger an expansion of eat into its transitive form. In fact, this is precisely the solution method advocated by Gazdar. We see then that if one adopts a non-terminal encoding scheme there should be no problem in parsing any single long-distance gap-filler relationship. That is, there is no need for a constraint like Subjacency.[5]

Second, the problem of unbounded left-context is directly avoided if the search space is limited to some literally finite left context. But this is just what the Subjacency constraint does: it limits where an antecedent NP could be to an immediately adjacent S or S-bar. This constraint has a simple interpretation in an actual parser (like that built by Marcus [1980]). The IF-THEN pattern-action rules that make up the Marcus parser's finite control "transition table" must be finite in order to be stored inside a machine. The rule actions themselves are literally finite. If the rule patterns must be literally stored (e.g., the pattern [S [S [S must be stored as an actual arbitrarily long string of S nodes, rather than as the regular set S+), then these patterns must be literally finite. That is, parsing patterns must refer to literally bounded right and left context (in terms of phrasal nodes).[6]

Note further that this constraint depends on the sheer representability of the parser's rule system in a finite machine, rather than on any details of implementation. Therefore it will hold invariantly with respect to machine design -- no matter what kind of machine we build, if we assume a literal representation of left-contexts, then some kind of finiteness constraint is required. The robustness of this result contrasts with the usual problems in applying "efficiency" results to explain grammatical constraints. These often fail because it is difficult to consider all possible implementations simultaneously. However, if the argument is invariant with respect to machine design, this problem is avoided.

Given literal left-contexts and no (or costly) backtracking, the argument so far motivates some bounding condition for ambiguous sentences like these. However, to get the full range of cases these functional facts must interact with properties of the rule-writing system as defined by the grammar. We will derive the fact that the bounding condition must be Subjacency (as opposed to tri- or quad-jacency) by appeal to the fact that grammatical constraints and rules are stated in a vocabulary which is non-counting: arithmetic predicates are forbidden. But this means that since only the predicate "adjacent" is permitted, any literal bounding restriction must be expressed in terms of adjacent domains: hence Subjacency. (Note that "adjacent" is also an arithmetic predicate.) Further, Subjacency must apply to all traces (not just traces of ambiguously transitive/intransitive verbs) because a restriction to just the ambiguous cases would involve using existential quantification. Quantificational predicates are barred in the rule-writing vocabulary of natural grammars.[7]
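To make the two options concrete, here is a small expository sketch (not from the original paper; the list-of-labels encoding of the left context and the function names are invented for illustration). It contrasts a rule that must search an unbounded literal left context for a wh antecedent with a Subjacency-style rule whose pattern mentions only a bounded number of nodes and so can be stored in a finite rule table.

BOUNDING = {"S", "NP"}   # bounding nodes assumed for English

def find_antecedent_unbounded(left_context):
    # left_context: phrase labels seen so far, most recent last, e.g.
    # ["WH", "S", "NP", "S", "VP"].  Searching all of it cannot be compiled
    # into a single finite rule pattern over literal node strings.
    return "WH" in left_context

def find_antecedent_subjacent(left_context):
    # Search back through at most one bounding node: a literally bounded
    # pattern, hence storable in a finite control table.
    crossed = 0
    for label in reversed(left_context):
        if label == "WH":
            return True
        if label in BOUNDING:
            crossed += 1
            if crossed > 1:
                return False
    return False

def expand_optionally_transitive_verb(postverbal_np, left_context):
    # The on-line decision for a verb like "eat": posit a trace only if a
    # licensing antecedent is found within the bounded search domain.
    if postverbal_np:
        return "transitive"
    if find_antecedent_subjacent(left_context):
        return "transitive with trace"
    return "intransitive"

print(expand_optionally_transitive_verb(False, ["WH", "S", "VP"]))            # trace posited
print(expand_optionally_transitive_verb(False, ["WH", "S", "S", "S", "VP"]))  # too far: intransitive

The point of the sketch is only that the bounded version is the one that can be written as a literal, finite IF-THEN pattern of the Marcus sort; the unbounded version would have to quantify over arbitrarily long left contexts.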
Next we extend the approach to NP movement and Gapping. Gapping is particularly interesting because it is difficult to explain why this construction (unlike other deletion rules) is bounded. That is, why is (3) but not (4) grammatical:

(3) John will hit Frank and Bill will [e]VP George.
*(4) John will hit Frank and I don't believe Bill will [e]VP George.

The problem with gapping constructions is that the attachment of phonologically identical complements is governed by the verb that the complement follows. Extraction tests show that in (5) the phrase "after Mary" attaches to one projection of V while in (6) it attaches to a higher projection (see Hornstein and Weinberg [1981] for details):

(5) John will run after Mary.
(6) John will arrive after Mary.

In gapping structures, however, the verb of the gapped constituent is not present in the string. Therefore, correct attachment of the complement can only be guaranteed by accessing the antecedent in the previous clause. If this is true, however, then the bounding argument for Subjacency applies to this case as well: given deterministic parsing of gapping done correctly, and a literal representation of left-context, then gapping must be context-bounded. Note that this is a particularly

7. Of course, there is another natural predicate that would produce a finite bound on rule context: that the antecedent NP and trace both be in the same S domain. Presumably, this is also an option that could get realized in some natural grammars: the resulting languages would not have overt movement outside of an S. Note that the natural predicates simply give the range of possible natural grammars, not those actually found. The elimination of quantificational predicates is supportable on grounds of acquisition.

4. Following the approach of DeRemer [1969], one builds a finite state automaton that recognizes exactly the set of left-context strings that can arise during the course of a right-most derivation, the so-called characteristic finite state automaton.

5. Plainly the same holds for a "hold cell" approach to computing filler-gap relationships.

6. Actually then, this kind of device falls into the category of bounded context parsing, as defined by Floyd [1964].

Since e is ungoverned iff [it is] a PRO and governed [iff it is] a trace, and government is a bounded predicate, being restricted to merely a single maximal projection (at worst an S).
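A similar toy sketch for the gapping case (again hypothetical; the subcategorization table and names are invented): the attachment of the post-gap phrase can only be decided by consulting the antecedent verb, so if the left context is literally represented, the rule is finitely statable only when that verb is guaranteed to lie in the immediately adjacent conjunct.

# Hypothetical verb classes: does the verb take the following PP as a
# complement (low attachment) or only as an adjunct (high attachment)?
TAKES_PP_COMPLEMENT = {"run": True, "arrive": False}

def attach_pp_after_gap(previous_clause_verb):
    # The gapped clause has no verb of its own; attachment is governed by the
    # antecedent verb.  Restricting the lookup to the adjacent conjunct keeps
    # the rule pattern literally bounded, as the bounding argument requires.
    if previous_clause_verb is None:
        raise ValueError("no antecedent verb within the bounded domain")
    if TAKES_PP_COMPLEMENT.get(previous_clause_verb, False):
        return "complement (low attachment)"
    return "adjunct (high attachment)"

# "John will run after Mary and Bill will [e] after Sue."
print(attach_pp_after_gap("run"))      # complement (low attachment)
# "John will arrive after Mary and Bill will [e] after Sue."
print(attach_pp_after_gap("arrive"))   # adjunct (high attachment)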
null
null
null
null
Main paper: i introduction: In its short history, computational linguistics has been driven by two distinct but interrelated goals. On the one hand, it has aimed at computational explanations of distinctively human linguistic behavior -- that is, accounts of why natural languages are the way they are viewed from the perspective of computation. On the other hand, it has accumulated a stock of engineering methods for building machines to deal with natural (and artificial) languages. Sometimes a single body of research has combined both goals. This was true of the work of Marcus [1980], for example. But all too often the goals have remained opposed -- even to the extent that current transformational theory has been disparaged as hopelessly "intractable" and no help at all in constructing working parsers. This paper shows that modern transformational grammar (the "Government-Binding" or "GB" theory as described in Chomsky [1981]) can contribute to both aims of computational linguistics. We show that by combining simple assumptions about efficient parsability along with some assumptions about just how grammatical theory is to be "embedded" in a model of language processing, one can actually explain some key constraints of natural languages, such as Subjacency. (The argument is different from that used in Marcus [1980].) In fact, almost the entire pattern of constraints taken as "axioms" by the GB theory can be accounted for. Second, contrary to what has sometimes been supposed, by exploiting these constraints we can show that a GB-based theory is particularly compatible with efficient parsing designs, in particular, with extended LR(k,t) parsers (of the sort described by Marcus [1980]). We can extend the LR(k,t) design to accommodate such phenomena as antecedent-PRO and pronominal binding, rightward movement, gapping, and VP deletion.

Let us consider how to explain locality constraints in natural languages. First of all, what exactly do we mean by a "locality constraint"? The paradigm case is that of Subjacency: the distance between a displaced constituent and its "underlying" canonical argument position cannot be too large, where the distance is gauged (in English) in terms of the number of S(entence) or NP phrase boundaries. For example, in sentence (1a) below, John (the so-called "antecedent") is just one S-boundary away from its presumably "underlying" argument position (denoted "x", the "trace") as the Subject of the embedded clause, and the sentence is fine:

(1a) John seems [S x to like ice cream].

However, all we have to do is to make the link between John and x extend over two S's, and the sentence is ill-formed:

(1b) John seems [S it is certain [S x to like ice cream]].

This restriction entails a "successive cyclic" analysis of transformational rules (see Chomsky [1973]). In order to derive a sentence like (1c) below without violating the Subjacency condition, we must move the NP from its canonical argument position through the empty Subject position in the next higher S and then to its surface slot:

(1c) John seems [e] to be certain x to get the ice cream.

Since the intermediate subject position is filled in (1b), there is no licit derivation for this sentence.

More precisely, we can state the Subjacency constraint as follows: No rule of grammar can involve X and Y in a configuration like the following,

... X ... [α ... [β ... Y ... ] ... ] ... X ...

where α and β are bounding nodes (in English, S or NP phrases).
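The configuration can be checked mechanically. The following toy function (an expository sketch; the list-of-intervening-labels encoding is an assumption of this illustration, not part of the GB formalism) counts the bounding nodes separating an antecedent X from its trace Y and flags a violation when more than one intervenes, reproducing the judgments for (1a) and (1b).

BOUNDING_NODES = {"S", "NP"}    # bounding nodes for English

def subjacency_ok(intervening_labels):
    # intervening_labels: labels of the phrases containing the trace Y but
    # not the antecedent X, e.g. ["S"] for (1a), ["S", "S"] for (1b).
    crossed = sum(1 for lab in intervening_labels if lab in BOUNDING_NODES)
    return crossed <= 1

print(subjacency_ok(["S"]))         # (1a) John seems [S x to like ice cream]      -> True
print(subjacency_ok(["S", "S"]))    # (1b) John seems [S it is certain [S x ...]]  -> False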
"Why should natural languages hc dcsigned Lhis way and not some other way? Why, that is, should a constraint like Subjaccncy exist at all? Our general result is that under a certain set of assumptions about grammars and their relationship to human sentence processing one can actually expect the following pattern of syntactic igcality constraints:(l) The antecedent-trace relationship must obey Subjaccncy, but other "binding" realtionships (e.g., NP--Pro) need not obey Subjaccncy.(2) Gapping constructitms must be subject to a bounding condition resembling Subjacency. but VP deletion nced not be.(3) Rightward movemcnt must be stricdy bounded.To the extent that this predicted pattern of constraints is actually observed --as it is in English and other languages --we obtain a genuine functional explanation of these constraints and support for the assumptions themselves. The argument is different from Man:us' because it accounts for syntactic locality constraints (like Subjaceney) ,as the joint effect of a particular theory of grammar, a theory of how that grammar is used in parsing, a criterion for efficient parsability. and a theory of of how the parser is builL In contrast, Marcus attempted to argue that Subjaceney could be derived from just the (independently justified) operating principles of a particular kind of parser.The assumptions we make are the following:(1) The grammar includes a level of annotated surface structure indicating how constituents have been displaced from their canonical predicate argument positions. Further, sentence analysis is divided into two stages, along the lines indicated by tile theory of Government and Binding: the first stage is a purely syntactic analysis that rebuilds annotated surface structure; the second stage carries out the interpretation of variables, binds them to operators, all making use of the "referential indices" of NPs.(2) To be "visible" at a stage of analysis a linguistic representation must be written in the vocabulary of that level. For example, to be affected by syntactic operations, a representation must be expressed in a syntactic vocabulary (in the usual sense); to be interpreted by operations at the second stage, the NPs in a representation must possess referential indices.(This assumption is not needed to derive the Subjaccncy constraint, but may be used to account for another "axiom" of current grammatical theory, the so-called "constituent command" constraint on antecedcnLs and the variables that they hind.) This "visibility" assumption is a rather natural one.(3) The rule-writing vocabulary of the grammar cannot make use of arithmetic predicates such as "one", "two" or "three". but only such predicates as "adjacent". Further, quzmtificational statements are not allowed m rt.les. These two assumptions are also rather standard. It has often been noted that grammars "do not count" --that grammatical predicates are structurally based. There is no rule of grammar that takes the just the fourth constituent of a sentence and moves it, for example. In contrast, many different kinds of rules of grammar make reference to adjacent constituents. (This is a feature found in morphological, phonological, and syntactic rules.) (4) Parsing is no....! done via a method that carries along (a representation) of all possible derivations in parallel.In particular, an Earley-type algorithm is ruled out. 
To the extent that multiple options about derivations are not pursued, the parse is "deterministic."(5) The left-context of the parse (as defined in Aho and Ullman [19721) is literally represented, rather than generatively represented (as, e.g., a regular set). In particular, just the symbols used by the grammar (S, NP. VP...) are part of the left-context vocabulary, and not "complex" symbols serving as proxies for the set of lefl.-context strings. 1 In effect, we make the (quite strong) assumption that the sentence processor adopts a direct, transparent embedding of the grammar.Other theories or parsing methods do not meet these constraints and fail to explain the existence of locality constraints with respect to thts particular set of assumpuons. 2 For example, as we show, there is no reason to expect a constraint like Subjacency in the Generalized Phrase Structure Grammars/GPSGsl of G,zdar 119811, because there is no inherent barrier to eastly processing a sentence where an antecedent and a trace are !.mboundedly far t'rt~m each other. Similarly if a parsing method like Earlcy's algorithm were actually used by people, than Sub]acency remains a my:;tcry on the functional grounds of efficient parsability. (It could still be explained on other functional grounds, e.g., that oflearnability.)To begin the actual argument then, assume that on-line sentence processing is done by something like a deterministic parser) Sentences like (2) cause trouble for such a parser:(2 2. Plainly. one is free to imagine some other set of assumptions that would do the job.3. If one a.ssumcs a backtracking parser, then the argument can also be made to go through, but only by a.,,,,~ummg that backtracking Ks vcr/co~tlS, Since this son of parser clearly ,,~ab:~umes the IR(kPt,',pe machines under t/le right co,mrual of 'cost". we make the stronger assumption of I R(k)-ncss.The problem is that on recognizing the verb eat the parser must decide whether to expand the parse with a trace (the transitive reading) or with no postverbal element (.the intransitive reading). The ambiguity cannot be locally resolved since eat takes both readings. It can only be resolved by checking to see whether there is an actual antecedent. Further, observe that this is indeed a parsing decision: the machine must make some decision about how to tu build a portion of the parse tree. Finally, given non-parallelism, the parser is not allowed to pursue both paths at once: it must decide now how to build the parse tree (by inserting an empty NP trace or not).Therefore, assuming that the correct decision is to be made on-line (or that retractions of incorrect decisions are costly) there must be an actual parsing rule that expands a category as transitive iff there is an immediate postverbal NP in the string (no movement) or if an actual antecedent is present. However, the phonologically overt antecedent can be unboundedly far away from the gap. Therefore, it would seem that the relevant parsing rule would have to refer to a potentially unbounded left context. Such a rule cannot be stated in the finite control table of an I,R(k) parser. Theretbre we must find some finite way of expressing the domain over which the antecedent must be searched.There are two ways of accomplishing this. First, one could express all possible left-contexts as somc regular set and then carry this representation along in the finite control table of the I,R(k) machine. This is always pu,,;sible m the case of a contcxt-fiee grammar, and m fact is die "standard" approach. 
4 However, m the case of (e.g.) ,,h moven!enk this demands a generative encoding of the associated finite state automaton, via the use of complex symbols like "S/wh" (denoting the "state" that a tvtt has been encountered) and rules to pass king this nun-literal representation of the state of the parse. Illis approach works, since wc can pass akmg this state encoding through the VP (via the complex non-terminal symbol VP/wh) and finally into the embedded S. This complex non-terminal is then used to trigger an expansion of eat into its transitive form. Ill fact, this is precisely the solution method advocated by Gazdar. We ~ce then that if one adopts a non-terminal encoding scheme there should he no p,oblem in parsing any single long-distance gap-filler relationship. That is, there is no need for a constraint like Subjacency. s Second, the problem of unbounded left-context is directly avoided if the search space is limited to some literally finite left context. But this is just what the Sttbjacency c(mstraint does: it limits where an antecedent NP could be to an immediately adjacent S or S. This constraint has a StlllpJe intcrprctatum m an actual parser (like that built hy Murcus [19};0 D. l'he IF-THEN pattern-action rules that make up the Marcus parser's ~anite control "transi:ion table" must be finite in order to he stored ioside a machine. The rule actions themselves are literally finite. If the role patterns must be /herally stored (e.g., the pattern [S [S"[S must be stored as an actual arbitrarily long string ors nodes, rather than as the regular set S+), then these patterns must be literally finite. That is, parsing patterns must refer to literally hounded right and left context (in terms of phrasal nodes). 6 Note Further that this constraint depends on the sheer represcntability of the parser's rule system in a finite machine, rather than on any details of implementation. Therefore it will hold invariantly with respect to rnactfine design --no matter kind of machine we build, if" we assume a literal representation of left-contexts, then some kind t)f finiteness constraint is required. The robustness of this result contrasts with the usual problems in applying "efficiency" results to explain grm'~T""'!cal constraints. These often fail because it is difficult to consider all possible implcmentauons simultaneously. However, if the argument is invariant with respect to machine desing, this problem is avoided.Given literal left-contexts and no (or costly) backtracking, the argument so far motivates some bounding condition for ambiguous sentences like these. However, to get the lull range of cases these functional facts must interact with properties of the rule writing system as defined by the grammar. We will derive the litct that the Imunding condition must be ~acency (as opposed to tri-or quad-jaccncy) by appeal to the lhct that grammatical c~m~tramts and rules arc ~tated in a vocabtdary which is non-c'vunmtg. ,',rithmetic predicates are forbidden. But this means that since only the prediu~lte "ad].cent" is permitted, any literal I)ouuding rc,~trict]oi] must be c.xprc,~)cd m tcrlllS of adjacent domains: t~e~;ce Subjaccncy. INert that ",djacent" is also an arithmetic predicate.) l:urthcr. Subjaccncy mu,,t appiy ~.o ,ill traces (not ju',t traces of,mlb=guously traw~itive/imransi[ive vcrb,o in:cause a restriction to just the ambiguous cases would low)ire using cxistentml quantilicati.n. Ouantificatiomd predicates are barred in the rule writing vocabulary of natural grammars. 
7Next we extend the approach to NP movement and Gapping. Gapping is particularly interesting because it is difficult ~o explain why this construction (tmlike other deletiou rules) is bounded. That is, why is (3) but not (4) grammatical:(3) John will hit Frank and Bill will [ely P George. *(4)John will hit Frank and I don't believe Bill will [elvpGeorge.The problem with gapping constructions is that the attachment of phonologically identical complements is governed by the verb that the complement follows. Extraction tests show that in {5) the pilrase u/?er M'ao' attaches to V" whde in (6) it attaches to V" (See Hornstem and Wemberg []981] for details.} (5) John will mn aftcr Mary. (6) John will arrivc after Mary.In gapping structures, however, the verb of the gapped constituent ,s not present in the string. Therefore. correct ,lltachrnent o( the complement can only be guaranteed by accessing the antecedent in the previous clause. If this is true however, then the boundlng argument for Suhjacency applies to this ease as well: given deterministic parsing of gapping done correctly, and a literal representation of left-context, then gapping must be comext-bounded. Note that this is a particularly 7 Of course, there zs a anolhcr natural predic.atc Ihat would produce a finite bound on rule context: i[ ~]) alld Irate hod I. bc in tile .ame S donlalll Prc~umahb', lhls is also an Optlllt3 ~l;iI could gel reah,ed in qOII|C n.'Ittlral l~rJoln'iai~: ll'ic resuhing languages would no( have ov,,:rt nlo~.eIIICill OUlside o[ an S. %o(e lllal Lhc naltllal plcdJc;des simply give the ranta¢ of po~edble ndiulal granmlars. ]lot those actually rour~d.The elimination of quanllfil',.llion predic~les is supportable on grounds o(acquisltton.Following the approactl of DcRemer []969], one budds a finHe stale automaton Lhat reco~nl/es exactly Ihe set of i¢[t-(OIIlext strings that cain arise during the course of a right-most derivation, the so-Gilled ch,melert.sllcf'.nife s/ale ClUlOmC~lott. 5 l'laml} the same Imlds for a "hold cell" apploaeh [o compulm 8 filler-gap relallonshipi 6. Actually Uteri. lhJ8 k;nd or device lall!; lllto lJae (~itegoly of bounded contc;~t parsing. a.' defiued b~. I ]oyd f19(.)4].F;hlce ~ ~s ungovcNicd fff a ~ovct'llcd t:~ F;L[:~c, and a go~c,' m~J is a bounded predicate, i hcmg Lcstrictcd Io mu~',dy a ~in~i¢ lllaX1111;il Drojcctlon (at worst al| S). Appendix:
null
null
null
null
{ "paperhash": [ "marcus|a_theory_of_syntactic_recognition_for_natural_language", "floyd|bounded_context_syntactic_analysis" ], "title": [ "A theory of syntactic recognition for natural language", "Bounded context syntactic analysis" ], "abstract": [ "Abstract : Assume that the syntax of natural language can be parsed by a left-to-right deterministic mechanism without facilities for parallelism or backup. It will be shown that this 'determinism' hypothesis, explored within the context of the grammar of English, leads to a simple mechanism, a grammar interpreter. (Author)", "Certain phase structure grammars define languages in which the phrasehood and structure of a substring of a sentence may be determined by consideration of only a bounded context of the substring. It is possible to determine, for any specified bound on the number of contextual characters considered, whether a given grammar is such a bounded context grammar. Such grammars are free from syntactic ambiguity. Syntactic analysis of sentences in a bounded context language may be performed by a standard process and requires a number of operations proportional to the length of sentence analyzed.\nBounded context grammars form models for most languages used in computer programming, and many methods of syntactic analysis, including analysis by operator precedence, are special cases of bounded context analysis." ], "authors": [ { "name": [ "Mitchell P. Marcus" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "R. W. Floyd" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null ], "s2_corpus_id": [ "6616065", "16285655" ], "intents": [ [], [] ], "isInfluential": [ false, false ] }
null
500
0.012
null
null
null
null
null
null
null
null
2098470b7d21724fd9d9d828e8c3cc0b399576a0
14692027
null
An Improper Treatment of Quantification in Ordinary {E}nglish
In most democratic countries most politicians can fool most of the people on almost every issue most of the time.
{ "name": [ "Hobbs, Jerry R." ], "affiliation": [ null ] }
null
null
21st Annual Meeting of the Association for Computational Linguistics
1983-06-01
10
72
null
In the currently standard ways of representing quantification in logical form, this sentence has 120 different readings, or quantifier scopings. Moreover, they are truly distinct, in the sense that for any two readings, there is a model that satisfies one and not the other. With the standard logical forms produced by the syntactic and semantic translation components of current theoretical frameworks and implemented systems, it would seem that an inferencing component must process each of these 120 readings in turn in order to produce a best reading. Yet it is obvious that people do not entertain all 120 possibilities, and people really do understand the sentence.The problem is not Just that inferencing is required for disamblguation.It is that people never do dlsambiguate completely. A single quantifier scoping is never chosen. (Van Lehn [1978] and Bobrow and Webber [1980] Finally, since the notion of "scope" is a powerful tool in semantic analysis, there should be a fairly transparent relationship between dependency information In the notation and standard representations of scope.approaches are ruled out by these criteria.Representing the sentence as a disjunction of the various readings. This is impossibly unwieldy.Many people feel that most sentences exhibit too few quantifier scope ambiguities for much effort to be devoted to this problem, but a casual inspection of several sentences from any text should convince almost everyone otherwise.Using as the logical notation a triple consisting of an expression of the propositional content of the sentence, a store of quantifier structures (e.g., as in Cooper [1975] , Woods [19781) , and a set of constraints on how the quantifier structures could be unstored. This would adequately capture the vagueness, but it is difficult to imagine defining inference procedures that would work on such an object.Indeed, Cooper did no inferenclng; Woods did little and chose a default reading heuristically before doing so. is existential, with narrow scope it is universal, and a shift in commitment from one to the other would involve significant restructuring of the logical form.The approach taken here uses the notion of the "typical element'" of a set, to produce a flat logical form of conjoined atomic predications. A treatment has been worked out only for monotone increasing determiners; this is described in Section 2.In Section 3 some ideas about other determiners are discussed. An inferenclng component, such as that explored in Hobbs [1976, 1980] , capable of resolving coreference, doing coercions, and refining predicates, will be assumed (but not discussed). Thus, translating the quantifier scoping problem into one of those three processes will count as a solution for the purposes of this paper. In '~ost men work," Q -"most", P = "man", and R -"work".Q will be referred to as a determiner.A determiner Q is monotone increasing if and only if for any RI and R2 such that the denotation of R1 is a subset of the denotation of R2, "Q Ps RI" implies "Q Ps R2" (Barwlse and Cooper [1981] ). For example, letting RI -"work hard" and R2 = "work", since "most men work hard" implies "most men work," the determiner "most" is monotone increasing.Intuitively, making the verb phrase more general doesn't change the truth value.Other monotone increasing determiners are "every", "some", "many", "several", "'any" and "a few"."No" and "few" are not. 
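The monotonicity definition is easy to check by brute force on a small model. The sketch below is only illustrative (the truth conditions assumed for the individual determiners are the usual generalized-quantifier ones, with "most" read as "more than half of"); it confirms that "every", "some", and "most" come out monotone increasing while "no" does not.

from itertools import chain, combinations

MEN = frozenset({"a", "b", "c", "d"})

# Determiners as relations between the restrictor set P and the predicate set R.
DETERMINERS = {
    "every": lambda P, R: P <= R,
    "some":  lambda P, R: bool(P & R),
    "most":  lambda P, R: len(P & R) > len(P) / 2,
    "no":    lambda P, R: not (P & R),
}

def subsets(s):
    return map(frozenset, chain.from_iterable(combinations(s, n) for n in range(len(s) + 1)))

def monotone_increasing(q, P, domain):
    # Q is monotone increasing iff Q(P, R1) and R1 <= R2 imply Q(P, R2).
    return all(q(P, R2)
               for R1 in subsets(domain) if q(P, R1)
               for R2 in subsets(domain) if R1 <= R2)

for name, q in DETERMINERS.items():
    print(name, monotone_increasing(q, MEN, MEN))
# every True, some True, most True, no False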
& (∀ y)(y ∈ s → work(y)))

For collective predicates such as "meet" and "agree", R would apply to the set rather than to each of its elements. Sometimes with singular noun phrases and determiners like "a", "some" and "any" it will be more convenient to treat the determiner as a relation between a set and one of its elements:

(∃ y) Q(y, {x | P(x)}) & R(y).

According to notation (1) there are two aspects to quantification. The first, which concerns a relation between two sets, is discussed in Section 2.2. The second aspect involves a predication made about the elements of one of the sets. The approach taken here to this aspect of quantification is somewhat more radical, and depends on a view of semantics that might be called "ontological promiscuity". This is described briefly in Section 2.3. Then in Section 2.4 the scope-neutral representation is presented.
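As a small worked example of notation (1) (purely illustrative; the toy model and the particular truth condition chosen for "most" are assumptions of this sketch, and the witness-set formulation is a reconstruction of the form shown above), "Most men work" is true in a model just in case some subset s of the men containing more than half of them has every member working.

MEN   = {"a", "b", "c", "d"}
WORKS = {"a", "b", "c"}

def most(s, p_set):
    # the determiner as a relation between the witness set s and {x | P(x)}
    return s <= p_set and len(s) > len(p_set) / 2

# (exists s) most(s, {x | man(x)}) & (forall y)(y in s -> work(y))
s = MEN & WORKS                       # a candidate witness set
print(most(s, MEN) and s <= WORKS)    # True: most men work in this model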
null
null
Expressing determiners as relations between sets allows us to express as axioms in a knowledge base more refined properties of the determiners than can be captured by representing them in terms of the standard quantlflers.let us note that, with the proper definitions of "every" and "some", (V sl,s2) every(sl,s2) <-> sl= s2 (y x,s2) some(x, s2) <->(~sl,s2) most(sl,s2) -> Isll > i/2 Is21 Next,consider "any". Instead of trying to force an interpretation of "any" as a standard quantifier, let us take it to mean "a random element of".(2) (~x,s) any(x,s) ~> x = random(s),where "random" is a function that returns a random element of a set. This means that the prototypical use of "any" is in sentences like Pick any card.Let me surround this with caveats. This can't be right, if for no other reason than that "any" is surely a more "primitive" notion in language than "random".Nevertheless, mathematics gives us firm intuitions about "random" and (2) may thus shed light on some linguistic facts.Many of the linguistic facts about "any" can be subsumed under two broad characterizations: i. It requires a "modal" or "nondeflnlte" context.For example, "John talks to any woman" must be interpreted dispositlonally. If we adopt (2), we can see this as deriving from the nature of randomness. It simply does not make sense to say of an actual entity that it is random.It normally acts as a universal quantifier outside the scope of the most immediate modal embedder. This is usually the most natural interpretation of "random". Moreover, since "any" extracts a single element, we can make sense out of cases in which "any" fails to act llke "every". I'Ii talk to anyone but only to one person. * I'Ii talk to everyone but only to one person.John wants to marry any Swedish woman. * John wants to marry every Swedish woman.(The second pair is due to Moore [1973] .) This approach does not, however, seem to offer an especially convincing explanation as to why "any" functions in questions as an existential quantifier.Davidson [1967] proposed a treatment of action sentences in which events are treated as individuals.This facilitated the representation of sentences with adverbials.But virtually every predication that can be made in natural language can be modified adverbially, be specified as to time, function as a cause or effect of something else, constitute a belief, be nominalized, and be referred to pronominally. It is therefore convenient to extend Davidson's approach to all predications, an approach that might be called "ontological promiscuity". One abandons all ontological scruples.A similar approach is used in many AI systems.We will use what might be called a "nomlnalization" operator ..... for predicates. Corresponding to every n-ary predicate p there will be an n+l-ary predicate p" whose first argument can be thought of as a condition of p's being true of the subsequent arguments. Thus, if "see(J,B)" means that John sees Sill, "see'(E,J,S)" will mean that E is John's seeing of Bill.For the purposes of this paper, we can consider that the primed and unprimed predicates are related by the following axiom schema:(3) (~ x,e) p'(e,x) -> p(x) (Vx)(~e) p(x) -> p'(e,x)It is beyond the scope of this paper to elaborate on the approach further, but it will be assumed, and taken to extremes, in the remainder of the paper. Let me illustrate the extremes to which it will be taken. Frequently we want to refer to the condition of two predicates p and q holding simultaneously of x. 
For this we will refer to the entity e such that and' [e,el,e2) & p*(el,x) & q'(e2,x)Here el is the condition of p being true of x, e2 is the condition of q being true of X, and e the condition of the conjunction being true.We will assume that a set has a typical element and that the logical form for a plural noun phrase will include reference to a set and its ~z~ical element. We could get around this problem by positing a special set of predicates that apply to typical elements and are systematically related to the predicates that apply to real elements.This idea should be rejected as being ad ho__~c, if aid did not come to us from an unexpected quarter --the notion of "grain size".predicate, it is normally at some degree of resolution, or "grain". At a fairly coarse grain, we might say that John is at the post office --"at(J,PO)". At a more refined grain, we have to say that he is at the stamp window --"at(J,SW)'"We normally think of grain in terms of distance, but more generally we can move from entities at one grain to entities at a coarser grain by means of an arbitrary partition. Fine-grained entities in the same equivalence class are indistinguishable at the coarser grain.a set S, consider the partition that collapses all elements of S into one element and leaves everything else unchanged.We can view the typical element of S as the set of real elements seen at this coarser grain --a grain at which, precisely, the elements of the set are indistinguishable.Formally, we can define an operator ~ which takes a set and a predicate as its arguments and produces what will be referred to as an "indexed predicate":T, if x=T(s) & (V yes) p(y), <;'(s,p)(x) = F, if x=~(s) &~(F y~s) p(y), p(x) otherwise.We will frequently abbreviate this "P5 " Note that predicate indexing gets us out of the above 3 An alternative approach would be to say that the typical element is in fact one of the real elements of the set, but that we will never know which one, and that furthermore, we will never know about the typical element any property that is not true of all the elements.This approach runs into technical difficulties involving the empty set. contradiction, for now "~(s) E 5 s" is not only true but tautologous.We are now in a position to state the properties typical elements should have. The first implements universal instantiation:(4) (Us,y) p$(~(s)) & yes -> p(y) (5) (Vs)([(¥x~s) p(x)] -> p~(~s)))That is, the properties of the typical element at the coarser grain are also the properties of the real elements at the finer grain, and the typical element has those properties that all the real elements have.Note that while we can infer a property from set membership, we cannot infer set membership from a property. That is, the fact that p is true of a typical element of a set s and p is true of an entity y, does not imply that y is an element of s. After all, we will want "three men" to refer to a set, and to be able to infer from y's being in the set the fact that y is a man. But we do not want to infer from y's being a man that y is in the set. Nevertheless, we will need a notation for expressing this stronger relation among a set, a typical element, and a defining condition.In to the condition e of p (or p$) being true of the typical element x of s --"p~ (e,x)". 
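Because the operator definition and axiom schemas above are hard to read in this copy, the following sketch restates them in executable form (a reconstruction, with invented names tau and sigma): the indexed predicate sigma(s, p) is true of the typical element tau(s) exactly when p holds of every real element of s, false at tau(s) otherwise, and behaves like p everywhere else; axiom (4) then licenses inference from the typical element down to the members, and axiom (5) from the members up to the typical element.

class Typical:
    # tau(s): the typical element of the set s, i.e. the set seen at the
    # coarser grain at which its members are indistinguishable.
    def __init__(self, members):
        self.members = frozenset(members)

def tau(s):
    return Typical(s)

def sigma(s, p):
    # the indexed predicate p_s (reconstruction of the garbled definition)
    s = frozenset(s)
    def p_s(x):
        if isinstance(x, Typical) and x.members == s:
            return all(p(y) for y in s)      # true at tau(s) iff p holds of every member
        return p(x)                          # p(x) otherwise
    return p_s

men  = {"a", "b", "c"}
work = lambda x: x in {"a", "b", "c"}
work_men = sigma(men, work)

# Axiom (5): (forall x in s) p(x)  ->  p_s(tau(s))
print(work_men(tau(men)))              # True
# Axiom (4): p_s(tau(s)) & y in s  ->  p(y)
print(all(work(y) for y in men))       # True
# A predicate indexed by the empty set comes out vacuously true of tau({}).
print(sigma(set(), work)(tau(set())))  # True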
Expression (6) can then be translated into the following flat predlcate-argument form:(7) set(s,x,e) & p~ (e,x) This should be read as saying that s is a set whose typical element is x and which is defined by condition e, which is the condition of p (interpreted at the level of the typical element) being true of x. The two critical properties of the predicate "set" which make (7) equivalent to (6) are the following: This approach involves a more thorough use of typical elements than two previous approaches. Webber [1978] admitted both set and prototype (my typical element) interpretations of phrases like "each man'" in order to have antecedents for both "they" and "he", but she maintained a distinction between the two. where ml is the set of all men, m the set of most of them referred to by the noun phrase "most men", and w the set referred to by the noun phrase "several women", and where "manl = ~'(ml,man)" and "womanl = ~" (w,woman)'. When the inferenclng component discovers there is a different set w for each element of the set m, w can be viewed as refering to the typical element of this set of sets:To eliminate the set notation, we can extend the definition of the dependency function to the typical element of m as follows:f(~(m)) -Z({f(x) I x~m})That is, f maps the typical element of a set into the typical element of the set of images under f of the elements of the set.From here on, we will consider all dependency functions so extended to the typical elements of their domains. lovel(r(ms),w) & manl(~(ms)) & woman(w)where "lovel -@(mS,Ax[love(x,w)])'" and "manl -(ms,man)". M is the set of men {A,B}, W is the set of women {X,Y}, and the arrows signify love.Let us assume that the process of interpreting this sentence is Just the process of identifying the existentially quantified variables ms and w and possibly coercing the predicates, in a way that makes the sentence true. 4EQUATIONFigure I. Two models of sentence (13).In Figure l(a) , "'love(A,X)" and "love(B,X)" are both true, so we can use axiom schema (5) to derive "lovel('~(M),X)". Thus, the identifications "ms -M'" and "w = X'" result in the sentence being true.Figure l(b), "love(A,X)" and "love(B,Y)" are both true, but since these predications differ 4 Bobrow and Webber [1980] similarly show scoplng information acquired by Interpretatlon against a small model. in more than one argument, we cannot apply axiom schema (5).First we define a dependency function f, mapping each man into a woman he loves, yielding "love(A,f(A))" and "love(B,f(B))". We can now apply axiom schema (5) to derive '" love2 ('~ (M), f (~ (M)) ) ", where"love2 = ~(M,Ax[love(x,f(x))])".Thus, we can make the sentence true by identifying ms with M and w with f(~'(M)), and by coercing "love" to "'love2" and "woman" to "~ (W,woman)". ,In each case we see that the identification of w is equivalent to solving the scope ambiguity problem.In our subsequent examples we will ignore the indexing on the predicates, until it must be mentioned in the case of embedded quantifiers. That is, r arrives, where r is the typical element of a set rs defined by the conjunction ea of r's being a representative and r's being of c, where c is a company. We will consider the two models in Figure 2 . R is the set of representatives {A,B,(C)}, K is the set of companies {X,Y,(Z,W)}, there is an arrow from the representatives to the companies they represent, and the representatives who arrived are circled.(a) (b) Figure 2 . 
Two models of sentence (14).In Figure 2 (a), "of(A,X)", "of(B,Y)" and "of(B,Z)" are true. Define a dependency function f to map A into X and B into Y. Then "of(A,f(A))" and "of(B,f(B))" are both true, so that "of(~(R),f(~(R)))"is also true. Thus we have the following identifications:c = f(Z(R)) =~({X,Y}), rs = R, r -t(R)In Figure 2 (b) "of(B~" and "of(C,Y)'" are both true, so "'of(~'(Rl),~)is also. Thus we may let c be Y and rs be RI, giving us the wide reading for "a company".the case where no one represents any company and no one arrived, we can let c be anything and rs be the empty set.Since, by the definition of o" , any predicate indexed by the empty set will be true of the typical element of the empty set, "arrlve#(~(# ))" will be true, and the sentence will be satisfied.It is worth pointing out that this approach solves the problem of the classic "donkey sentences".If in sentence (14) we had had the verb phrase "hates it", then "it" would be resolved to c, and thus to whatever c was resolved to.far the notation of typical elements and dependency functions has been introduced; it has been shown how scope information can be represented by these means; and an example of inferential processing acquiring that scope information has been given. Now the precise relation of this notation to standard notation must be specified.This can be done by means of an algorithm that takes the inferential notation, together with an indication of which proposition is asserted by the sentence, and produces In the conventional form all of the readings consistent with the known dependency information.First we must put the sentence into what will be called a "bracketed notation". We associate with each variable v an indication of the corresponding quantifier; this is determined from such pieces of the inferential logical form as those involving the predicates "set" and "most"; in the algorithm below it is refered to as "Quant(v)". to a narrow reading. The third BRANCH corresponds to the decision of how wide a reading to give to an embedded quantifier.Dependency constraints can be built into this algorithm by restricting the elements of its argument that BRANCH can choose.If the variables x and y are at the same level and y is dependent on x, then the first BRANCH cannot choose x.If y is embedded under x and y is dependent on x, then the second BRANCH must choose G(R).In the third BRANCH, if any top-level bracketed variable in Form is dependent on any variable one level of recurslon up, then G(Form) must be chosen.A fuller explanation of this algorithm and several further examples of the use of this notation are given in a longer version of this paper.The approach of Section 2 will not work for monotone decreasing determiners, such as "few" and "no".Intuitively, the reason is that the sentences they occur in make statements about entities other than just those in the sets referred to by the noun phrase. Thus, Few men work.is more a negative statement about all but a few of the men than a positive statement about few of them.One possible representation would be similar to (I), but wlth the implication reversed. This is unappealing, however, among other things, because the predicate P occurs twice, making the relation between sentences and logical forms less direct.Another approach would take advantage of the above intuition about what monotone decreasing determiners convey. 
That is, we convert the sentence into a negative assertion about the complement of the noun phrase, reducing this case to the monotone increasing case. For example, "few men work" would be represented as follows: [...] It has been conjectured that all such determiners can be expressed as conjunctions of monotone determiners. For example, "exactly three" means "at least three and at most three". If this is true, then they all yield to the approach presented here. Moreover, because of redundancy, only two new conjuncts would be introduced by this method.
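One way to cash out the complement idea in this section, offered only as an illustration (the exact logical form intended for "few men work" is not recoverable from this copy, so the rendering below, which reads it roughly as "most men do not work", is an assumption of the sketch):

MEN   = {"a", "b", "c", "d"}
WORKS = {"d"}

def most(s, p_set):
    return s <= p_set and len(s) > len(p_set) / 2

# "Few men work" recast as a monotone increasing assertion about the complement:
# (exists s) most(s, {x | man(x)}) & (forall y)(y in s -> not work(y))
s = MEN - WORKS
print(most(s, MEN) and all(y not in WORKS for y in s))    # True: few men work here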
null
Main paper: determiners as relations between sets: Expressing determiners as relations between sets allows us to express as axioms in a knowledge base more refined properties of the determiners than can be captured by representing them in terms of the standard quantlflers.let us note that, with the proper definitions of "every" and "some", (V sl,s2) every(sl,s2) <-> sl= s2 (y x,s2) some(x, s2) <->(~sl,s2) most(sl,s2) -> Isll > i/2 Is21 Next,consider "any". Instead of trying to force an interpretation of "any" as a standard quantifier, let us take it to mean "a random element of".(2) (~x,s) any(x,s) ~> x = random(s),where "random" is a function that returns a random element of a set. This means that the prototypical use of "any" is in sentences like Pick any card.Let me surround this with caveats. This can't be right, if for no other reason than that "any" is surely a more "primitive" notion in language than "random".Nevertheless, mathematics gives us firm intuitions about "random" and (2) may thus shed light on some linguistic facts.Many of the linguistic facts about "any" can be subsumed under two broad characterizations: i. It requires a "modal" or "nondeflnlte" context.For example, "John talks to any woman" must be interpreted dispositlonally. If we adopt (2), we can see this as deriving from the nature of randomness. It simply does not make sense to say of an actual entity that it is random.It normally acts as a universal quantifier outside the scope of the most immediate modal embedder. This is usually the most natural interpretation of "random". Moreover, since "any" extracts a single element, we can make sense out of cases in which "any" fails to act llke "every". I'Ii talk to anyone but only to one person. * I'Ii talk to everyone but only to one person.John wants to marry any Swedish woman. * John wants to marry every Swedish woman.(The second pair is due to Moore [1973] .) This approach does not, however, seem to offer an especially convincing explanation as to why "any" functions in questions as an existential quantifier.Davidson [1967] proposed a treatment of action sentences in which events are treated as individuals.This facilitated the representation of sentences with adverbials.But virtually every predication that can be made in natural language can be modified adverbially, be specified as to time, function as a cause or effect of something else, constitute a belief, be nominalized, and be referred to pronominally. It is therefore convenient to extend Davidson's approach to all predications, an approach that might be called "ontological promiscuity". One abandons all ontological scruples.A similar approach is used in many AI systems.We will use what might be called a "nomlnalization" operator ..... for predicates. Corresponding to every n-ary predicate p there will be an n+l-ary predicate p" whose first argument can be thought of as a condition of p's being true of the subsequent arguments. Thus, if "see(J,B)" means that John sees Sill, "see'(E,J,S)" will mean that E is John's seeing of Bill.For the purposes of this paper, we can consider that the primed and unprimed predicates are related by the following axiom schema:(3) (~ x,e) p'(e,x) -> p(x) (Vx)(~e) p(x) -> p'(e,x)It is beyond the scope of this paper to elaborate on the approach further, but it will be assumed, and taken to extremes, in the remainder of the paper. Let me illustrate the extremes to which it will be taken. Frequently we want to refer to the condition of two predicates p and q holding simultaneously of x. 
For this we will refer to the entity e such that and' [e,el,e2) & p*(el,x) & q'(e2,x)Here el is the condition of p being true of x, e2 is the condition of q being true of X, and e the condition of the conjunction being true.We will assume that a set has a typical element and that the logical form for a plural noun phrase will include reference to a set and its ~z~ical element. We could get around this problem by positing a special set of predicates that apply to typical elements and are systematically related to the predicates that apply to real elements.This idea should be rejected as being ad ho__~c, if aid did not come to us from an unexpected quarter --the notion of "grain size".predicate, it is normally at some degree of resolution, or "grain". At a fairly coarse grain, we might say that John is at the post office --"at(J,PO)". At a more refined grain, we have to say that he is at the stamp window --"at(J,SW)'"We normally think of grain in terms of distance, but more generally we can move from entities at one grain to entities at a coarser grain by means of an arbitrary partition. Fine-grained entities in the same equivalence class are indistinguishable at the coarser grain.a set S, consider the partition that collapses all elements of S into one element and leaves everything else unchanged.We can view the typical element of S as the set of real elements seen at this coarser grain --a grain at which, precisely, the elements of the set are indistinguishable.Formally, we can define an operator ~ which takes a set and a predicate as its arguments and produces what will be referred to as an "indexed predicate":T, if x=T(s) & (V yes) p(y), <;'(s,p)(x) = F, if x=~(s) &~(F y~s) p(y), p(x) otherwise.We will frequently abbreviate this "P5 " Note that predicate indexing gets us out of the above 3 An alternative approach would be to say that the typical element is in fact one of the real elements of the set, but that we will never know which one, and that furthermore, we will never know about the typical element any property that is not true of all the elements.This approach runs into technical difficulties involving the empty set. contradiction, for now "~(s) E 5 s" is not only true but tautologous.We are now in a position to state the properties typical elements should have. The first implements universal instantiation:(4) (Us,y) p$(~(s)) & yes -> p(y) (5) (Vs)([(¥x~s) p(x)] -> p~(~s)))That is, the properties of the typical element at the coarser grain are also the properties of the real elements at the finer grain, and the typical element has those properties that all the real elements have.Note that while we can infer a property from set membership, we cannot infer set membership from a property. That is, the fact that p is true of a typical element of a set s and p is true of an entity y, does not imply that y is an element of s. After all, we will want "three men" to refer to a set, and to be able to infer from y's being in the set the fact that y is a man. But we do not want to infer from y's being a man that y is in the set. Nevertheless, we will need a notation for expressing this stronger relation among a set, a typical element, and a defining condition.In to the condition e of p (or p$) being true of the typical element x of s --"p~ (e,x)". 
Expression (6) can then be translated into the following flat predlcate-argument form:(7) set(s,x,e) & p~ (e,x) This should be read as saying that s is a set whose typical element is x and which is defined by condition e, which is the condition of p (interpreted at the level of the typical element) being true of x. The two critical properties of the predicate "set" which make (7) equivalent to (6) are the following: This approach involves a more thorough use of typical elements than two previous approaches. Webber [1978] admitted both set and prototype (my typical element) interpretations of phrases like "each man'" in order to have antecedents for both "they" and "he", but she maintained a distinction between the two. where ml is the set of all men, m the set of most of them referred to by the noun phrase "most men", and w the set referred to by the noun phrase "several women", and where "manl = ~'(ml,man)" and "womanl = ~" (w,woman)'. When the inferenclng component discovers there is a different set w for each element of the set m, w can be viewed as refering to the typical element of this set of sets:To eliminate the set notation, we can extend the definition of the dependency function to the typical element of m as follows:f(~(m)) -Z({f(x) I x~m})That is, f maps the typical element of a set into the typical element of the set of images under f of the elements of the set.From here on, we will consider all dependency functions so extended to the typical elements of their domains. lovel(r(ms),w) & manl(~(ms)) & woman(w)where "lovel -@(mS,Ax[love(x,w)])'" and "manl -(ms,man)". M is the set of men {A,B}, W is the set of women {X,Y}, and the arrows signify love.Let us assume that the process of interpreting this sentence is Just the process of identifying the existentially quantified variables ms and w and possibly coercing the predicates, in a way that makes the sentence true. 4EQUATIONFigure I. Two models of sentence (13).In Figure l(a) , "'love(A,X)" and "love(B,X)" are both true, so we can use axiom schema (5) to derive "lovel('~(M),X)". Thus, the identifications "ms -M'" and "w = X'" result in the sentence being true.Figure l(b), "love(A,X)" and "love(B,Y)" are both true, but since these predications differ 4 Bobrow and Webber [1980] similarly show scoplng information acquired by Interpretatlon against a small model. in more than one argument, we cannot apply axiom schema (5).First we define a dependency function f, mapping each man into a woman he loves, yielding "love(A,f(A))" and "love(B,f(B))". We can now apply axiom schema (5) to derive '" love2 ('~ (M), f (~ (M)) ) ", where"love2 = ~(M,Ax[love(x,f(x))])".Thus, we can make the sentence true by identifying ms with M and w with f(~'(M)), and by coercing "love" to "'love2" and "woman" to "~ (W,woman)". ,In each case we see that the identification of w is equivalent to solving the scope ambiguity problem.In our subsequent examples we will ignore the indexing on the predicates, until it must be mentioned in the case of embedded quantifiers. That is, r arrives, where r is the typical element of a set rs defined by the conjunction ea of r's being a representative and r's being of c, where c is a company. We will consider the two models in Figure 2 . R is the set of representatives {A,B,(C)}, K is the set of companies {X,Y,(Z,W)}, there is an arrow from the representatives to the companies they represent, and the representatives who arrived are circled.(a) (b) Figure 2 . 
Two models of sentence (14).In Figure 2 (a), "of(A,X)", "of(B,Y)" and "of(B,Z)" are true. Define a dependency function f to map A into X and B into Y. Then "of(A,f(A))" and "of(B,f(B))" are both true, so that "of(~(R),f(~(R)))"is also true. Thus we have the following identifications:c = f(Z(R)) =~({X,Y}), rs = R, r -t(R)In Figure 2 (b) "of(B~" and "of(C,Y)'" are both true, so "'of(~'(Rl),~)is also. Thus we may let c be Y and rs be RI, giving us the wide reading for "a company".the case where no one represents any company and no one arrived, we can let c be anything and rs be the empty set.Since, by the definition of o" , any predicate indexed by the empty set will be true of the typical element of the empty set, "arrlve#(~(# ))" will be true, and the sentence will be satisfied.It is worth pointing out that this approach solves the problem of the classic "donkey sentences".If in sentence (14) we had had the verb phrase "hates it", then "it" would be resolved to c, and thus to whatever c was resolved to.far the notation of typical elements and dependency functions has been introduced; it has been shown how scope information can be represented by these means; and an example of inferential processing acquiring that scope information has been given. Now the precise relation of this notation to standard notation must be specified.This can be done by means of an algorithm that takes the inferential notation, together with an indication of which proposition is asserted by the sentence, and produces In the conventional form all of the readings consistent with the known dependency information.First we must put the sentence into what will be called a "bracketed notation". We associate with each variable v an indication of the corresponding quantifier; this is determined from such pieces of the inferential logical form as those involving the predicates "set" and "most"; in the algorithm below it is refered to as "Quant(v)". to a narrow reading. The third BRANCH corresponds to the decision of how wide a reading to give to an embedded quantifier.Dependency constraints can be built into this algorithm by restricting the elements of its argument that BRANCH can choose.If the variables x and y are at the same level and y is dependent on x, then the first BRANCH cannot choose x.If y is embedded under x and y is dependent on x, then the second BRANCH must choose G(R).In the third BRANCH, if any top-level bracketed variable in Form is dependent on any variable one level of recurslon up, then G(Form) must be chosen.A fuller explanation of this algorithm and several further examples of the use of this notation are given in a longer version of this paper. other determlners: The approach of Section 2 will not work for monotone decreasing determiners, such as "few" and "no".Intuitively, the reason is that the sentences they occur in make statements about entities other than just those in the sets referred to by the noun phrase. Thus, Few men work.is more a negative statement about all but a few of the men than a positive statement about few of them.One possible representation would be similar to (I), but wlth the implication reversed. This is unappealing, however, among other things, because the predicate P occurs twice, making the relation between sentences and logical forms less direct.Another approach would take advantage of the above intuition about what monotone decreasing determiners convey. 
That is, we convert the sentence into a negative assertion about the complement of the noun phrase, reducing this case to the monotone increasing case. For example, "few men work" would be represented as follows: that all such determiners can be expressed as conjunctions of monotone determiners. For example, "exactly three" means "at least three and at most three". If this is true, then they all yield to the approach presented here. Moreover, because of redundancy, only two new conjuncts would be introduced by this method.

In the currently standard ways of representing quantification in logical form, this sentence has 120 different readings, or quantifier scopings. Moreover, they are truly distinct, in the sense that for any two readings, there is a model that satisfies one and not the other. With the standard logical forms produced by the syntactic and semantic translation components of current theoretical frameworks and implemented systems, it would seem that an inferencing component must process each of these 120 readings in turn in order to produce a best reading. Yet it is obvious that people do not entertain all 120 possibilities, and people really do understand the sentence. The problem is not just that inferencing is required for disambiguation. It is that people never do disambiguate completely. A single quantifier scoping is never chosen. (Van Lehn [1978] and Bobrow and Webber [1980] Finally, since the notion of "scope" is a powerful tool in semantic analysis, there should be a fairly transparent relationship between dependency information in the notation and standard representations of scope.

Certain approaches are ruled out by these criteria. Representing the sentence as a disjunction of the various readings: this is impossibly unwieldy. Many people feel that most sentences exhibit too few quantifier scope ambiguities for much effort to be devoted to this problem, but a casual inspection of several sentences from any text should convince almost everyone otherwise. Using as the logical notation a triple consisting of an expression of the propositional content of the sentence, a store of quantifier structures (e.g., as in Cooper [1975], Woods [1978]), and a set of constraints on how the quantifier structures could be unstored: this would adequately capture the vagueness, but it is difficult to imagine defining inference procedures that would work on such an object. Indeed, Cooper did no inferencing; Woods did little and chose a default reading heuristically before doing so. is existential, with narrow scope it is universal, and a shift in commitment from one to the other would involve significant restructuring of the logical form.

The approach taken here uses the notion of the "typical element" of a set, to produce a flat logical form of conjoined atomic predications. A treatment has been worked out only for monotone increasing determiners; this is described in Section 2. In Section 3 some ideas about other determiners are discussed. An inferencing component, such as that explored in Hobbs [1976, 1980], capable of resolving coreference, doing coercions, and refining predicates, will be assumed (but not discussed). Thus, translating the quantifier scoping problem into one of those three processes will count as a solution for the purposes of this paper.
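Since the argument turns on the combinatorics of readings, a small sketch may help fix ideas. Treating readings simply as linear orderings of the quantified variables (a deliberate simplification of the bracketed-notation algorithm sketched above, not the paper's BRANCH procedure; the variable names and the dependency set below are hypothetical), five quantifiers give 5! = 120 orderings, and each piece of dependency information discovered by inference prunes the set of orderings that remain consistent, without ever forcing a single choice.

# Minimal sketch (mine, not the paper's algorithm): readings treated as orderings
# of quantified variables; a dependency (x, y), meaning y depends on x, rules out
# every ordering in which y outscopes x.
from itertools import permutations

def consistent_scopings(variables, dependencies):
    keep = []
    for order in permutations(variables):
        pos = {v: i for i, v in enumerate(order)}
        if all(pos[x] < pos[y] for (x, y) in dependencies):
            keep.append(order)
    return keep

vs = ["v1", "v2", "v3", "v4", "v5"]
print(len(consistent_scopings(vs, set())))            # 120 readings with no information
print(len(consistent_scopings(vs, {("v1", "v2")})))   # 60 remain after one dependency is learned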
In "Most men work," Q = "most", P = "man", and R = "work". Q will be referred to as a determiner. A determiner Q is monotone increasing if and only if for any R1 and R2 such that the denotation of R1 is a subset of the denotation of R2, "Q Ps R1" implies "Q Ps R2" (Barwise and Cooper [1981]). For example, letting R1 = "work hard" and R2 = "work", since "most men work hard" implies "most men work," the determiner "most" is monotone increasing. Intuitively, making the verb phrase more general doesn't change the truth value. Other monotone increasing determiners are "every", "some", "many", "several", "any" and "a few". "No" and "few" are not.

& (∀ y)(y ∈ s → work(y))

For collective predicates such as "meet" and "agree", R would apply to the set rather than to each of its elements. Sometimes with singular noun phrases and determiners like "a", "some" and "any" it will be more convenient to treat the determiner as a relation between a set and one of its elements:

(∃ y) Q(y,{x | P(x)}) & R(y).

According to notation (1) there are two aspects to quantification. The first, which concerns a relation between two sets, is discussed in Section 2.2. The second aspect involves a predication made about the elements of one of the sets. The approach taken here to this aspect of quantification is somewhat more radical, and depends on a view of semantics that might be called "ontological promiscuity". This is described briefly in Section 2.3. Then in Section 2.4 the scope-neutral representation is presented.
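The monotone increasing property is easy to check mechanically on finite models. The following sketch is my own illustration, not part of the paper; the generalized-quantifier definitions of "most" and "no" are the standard ones and are used here only as examples. A determiner is treated as a relation Q(P,R) between finite sets, and the Barwise-and-Cooper property is verified by brute force over a small universe.

# Minimal sketch (hypothetical, not from the paper): brute-force check that a
# determiner, viewed as a relation Q(P, R) between finite sets, is monotone
# increasing: R1 a subset of R2 and Q(P, R1) must imply Q(P, R2).
from itertools import chain, combinations

def subsets(universe):
    items = list(universe)
    return [set(c) for c in chain.from_iterable(combinations(items, k) for k in range(len(items) + 1))]

def monotone_increasing(q, universe):
    subs = subsets(universe)
    return all(q(p, r2)
               for p in subs for r1 in subs for r2 in subs
               if r1 <= r2 and q(p, r1))

most = lambda p, r: len(p & r) > len(p) / 2   # "most Ps are Rs"
no   = lambda p, r: len(p & r) == 0           # "no Ps are Rs"

u = {1, 2, 3, 4}
print(monotone_increasing(most, u))   # True: "most men work hard" implies "most men work"
print(monotone_increasing(no, u))     # False, as the text notes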
null
null
null
null
{ "paperhash": [ "vanlehn|determining_the_scope_of_english_quantifiers" ], "title": [ "Determining the Scope of English Quantifiers" ], "abstract": [ "Abstract : One can represent the meaning of English sentences in a formal logical notation such that the translation of English into this logical form is simple and general. This report covers a particular kind of meaning, namely quantifier scope, and for a particular part of the translation, namely the syntactic influence on the translation. Three different logical forms are presented, and their translation rules are examined. One of the logical forms is predicate calculus. The translation rules for it were developed by Robert May (may 1977). The other two logical forms are Skolem form and a simple computer programming language. The translation rules for these two logical forms are new. All three sets of translation rules are shown to be general, in the sense that the same rules express the constraints that syntax imposes on certain other linguistic phenomena. For example, the rule that constrain the translation into Skolem form are shown to constrain definite np anaphora as well. A large body of carefully collected data is presented, and used to assess the empirical accuracy of each of the theories." ], "authors": [ { "name": [ "K. VanLehn" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null ], "s2_corpus_id": [ "122721053" ], "intents": [ [] ], "isInfluential": [ false ] }
null
500
0.144
null
null
null
null
null
null
null
null
f6f4ecc100ba4afc8530cdd024b56b1599d07c5d
10031076
null
A Prolegomenon to Situation Semantics
An attempt is made to prepare Computational Linguistics for Situation Semantics.
{ "name": [ "Israel, David J." ], "affiliation": [ null ] }
null
null
21st Annual Meeting of the Association for Computational Linguistics
1983-06-01
20
4
null
The editors of the AI Journal recently hit upon the nice notion of correspondents' columns.The basic idea was to solicit experts in various fields, both within and outside of Artificial Intelligence, to provide "guidance to important, interesting current literature" in their fields. For Philosophy, they made the happy choice of Dan Dennett;for natural language processing, the equally happy choice of Barbara Grosz. Each has so far contributed one column, and these early contributions overlap in one, and as it happens, only one, particular;to wit: Situation~manties. Witness Dennett:""~ t ~oplcln " Cis] the hottest new philosophical loglc... [is] in some ways a successor or rival to Montague semantics.In recent work, Barwlse and Perry address the probZem [of what information from the context of an utterance affects which aspects of interpretation and how?] in the context of a proposed model theory of natural language, one that appears to be more compatible with the needs of AI than previous theories .... EI]t is of interest to work in natural-language processing for the kind of compositional semantics it ~ roposes, and the way in which it allows he contexts in which in an utterance is used to affect its interpretation.What is all the fuss about?I want to address this question, but rather indirectly.I want to situate situation semantics in "conceptual space" and draw some comparisons and contrasts between it and accounts in the style of Richard Montague.To this end, a few preliminary points are in order.First, as to the state of the Situation Semantics literature. There is as yet no published piece of the scope and detail of either "English as a Formal Language" or "The Proper Treatment of Ouantlficatlon in Ordinary English".Nor, of course, is there anything llke ~hat large body of work by philosophers and linguists -computational and otherwise -that has been produced from within the Montague paradigm. Montague's work was more or hess the first of its kind.It excited, quite justifiably, an extraordinary amount of interest t and has already inspired a distinguished body or work, some of it from within AI and Computational Linguistics.The latter can hardly be said for Moreover there is in the works a ceiiaboratlve effort, to be called Situations andS. m This will contain a "Fragment of Situation Semantics", a treatment of an extended fra~ent of ~.Last, for the moment, but not least, is a second book by Barwise and Perry, ~ ~, which will include a treatment of an even more extended fragment of English, together with a self-contalned treatment of the technical, mathematical background.(By "self-contalned".understand: not requiring either familiarity with or acceptance of The Big Picture ~ resented in S&A.) The bottom line: there is very Ittle of Situation Semantics presently available to the masses of hungry researchers.There are important points of similarity between Situation and Montague semantics, of course. One is that both are committed to formulating mathematically rigorous semantic accounts of English. To this end, both, of course, dip heavily into set theory.But this isn't saying a whole lot; for they deploy very different set theories. Montague, for a variety of technical reasons, was very fond of MKM, a very powerful theory, countenancing huge collections.MKM allows for both sets and (proper) classes, the latter being collections too big to be elements of other collections, and too big to be sets, say, of ZF. 
It also provides an unnervingly powerful comprehension axiom.B&P, on the other hand, have at least provisionally adopted KPU, a surprisingly weak set theory.Indeed, the vanilla version of KPU comes without an axiom of infinity and (more or less hence) has a model in the hereditarily finite sets.In that setting, even little infinite coliectlons, llke the universe of hereditarily finite sets, are proper classes, and beyond the pale.Enough for the moment of set theory, although we shall have to return to this strange land for one more brief visit.More important, and perhaps more disheartening, similarities are immediately to hand. Both Montague and B&P -thus far -restrict themselves to the declarative fragment of English; Montague, for the obvious reason that he was a model theorist and a student of Tarskl.For such types, the crucial notion to be explicated is that of "truth mThe collaborators being B&P, Robin Cooper, Hans Kamp, and Stanley Peters. of a sentence on an interpretation".Monta~e showed no interest in the use(s) of lar~Euage.Of course people working within his tradition are not debarred from doing so; but any such interest is an extra added attraction.The same point about model theory, broadly construed, holds for Barwlse-Perry as well; they certainly aren't syntaeticians.But in their case it is reinforced by philosophical considerations which point toward the use of language to convey information as the central use of language -hence, to assertlng as the central kind of utterance or speech act.Thus, even when they narrow their sights to this one use, the notion that language is something to be put to various uses by humans to further certain of their purposes is not foreign to Situation Semantics. • Second, B&P (again: so far) stop short at the awesome boundary of the period.Here again, this was only to be expected; and here again, the crucial question is whether their overall philosophical perspective so informs their account of natural language as to enable a more fruitful accommodation of work on various aspects of extended discourse.Barbara Grosz hints at a suspicion I share, that although at the moment much of what we have in this regard are promissory notes and wishfulthinking, the answer is in the affirmative, me attempting to apply (newly) orthodox mathematical techniques to the solution of classical problems in the semantics of natural languages, many of which had to do with intensional contexts. After all, these new techniques -in the development of which Montague played a role -had precisely to do with the treatment of formal languages containing modal and other intensional constructions.What made a fragment of English of interest to Montague, then, was that it contained loads of such contexts. It is as if all of that wondrous machinery, and the technical brilliance to deploy it. were aimed at an analysis of the following sentence: While the was ~ ~seemed to be lookln~ for ~ unicorn who was thinkinK ~ ~ centaur. What is astounding, of course, is that Montague should have been able to pull a systematic and rigorous treatment of such contexts out of the model-theoretlc hat.When we turn to Situation Semantics, on the other hand, we seem to be back in the linguistic world of flrst-grade readers: Spot ran. ~ saw ~run.Jane~ that SPot ran. rndee~, t~ malor concern of Barwise-Perry is not the semantics of natural language at all.They have bigger (well, different) fish to fry. 
First and foremost, they are concerned with sketching an account of the place of meaning and mind in the universe, an account that finds the source of meaning in nomic regularities among kinds of events (sltuatlons)L regularltles which, in general, are independent of Aar~uage anu mind.For the frying of said fish, a treatment of cognitive attitudes is essential. Moreover, and not independently, for any attempt to apply their overall philosophical picture to the semantics of natural language, the propositional attitude contexts pose a crucial and seemingly I"A Fragment of Situation Semantics, will contain a treatment of certain kinds of English interrogatives ; further out in the future, Situation ~ will contain such a more extensive treatment.eeBreaking out of the straightjacket of the sentence is the job of Situations in Discourse. insuperable obstacleo tee Hence the fact that the book ~ and Attitudes precedes Situation -the first lays the philosophical foundations for the second.Thus the origin of their concern even with the classical problems of the propositional attitudes is different from. though by no means incompatible with, that of Montague's. Something brief must now be said about ~-big picture. Here goes.The work of B&P can be seen as part of a continuing debate in philosophy about the source of the intentlonallty of the mental -and the nature of meaning in general; a debate about the right account to give of the phenomenon of one thing or event or state-of-affalrs being able to represent (carry information about) another thlr~ or event or state-of-affalrs.On one side stand those who see the phenomenon of Intentionallty as dependent on language -no representation without notation. This doctrine is the heart of current orthodoxy in both philosophy of mind and meta-theory of cognitive psychology.(See, by way of best example, [5] :) It is also a doctrine widely thought to oe presupposed b~ the whole endeavor of Artificial Intelligence.On another side are those who see the representational power of language as itself based on the intentlonallty of mlnd. It The striking thing about Barwise and Perry is that, while they stand firmly with those who deny that meaning and intentlonality essentially involve language, they reject the thesis that intentlonallty and meaning are essentlaliy mental or mind-lnvolvlng.The source of meaning and intentlonallty is to be found, rather, in the existence of lawllke regularities -constraints -among kinds of events. For Barwlse-Perry, the analysis of meaning begins with such facts as that: smoke means fire or those In~t mean measles.The ground of such facts lies e ways of the world; in the regularities between event types in virtue of which events of one type can carry information about events of other types.If semantics is the theory of meaning, then there is no pun intended in the application of semantic notions to situations in which there is no use of language and, indeed, in which there are no minds.Meaning's natural home is the world, for meaning arises out of the regular relations that hold among situations, among bits of reality.We believe linguistic meaning should be seen within this general picture of a world teeming with meaning, a world full of information for organisms appropriately attuned to that meaning, tieThere is yet another dimension to the philosophical debate, one to which Barwise "eeFor an important philosophical predecessor, see [~] . classification of (external) events as derivative .... 
A second approach is to focus on the external significance of language, on its connection with the described world rather than the describing mind. Sentences are classified not by the ideas they express, but by how they describe thlngs to be .... Frege adopted a third strategy. He postulated a third realm, a realm neither of ideas nor of worldly events, but of senses.Senses are the "philosopher's stone", the medium that coordinates all three elements in our equation: minds, words and objects.Hinds grasp senses, words express them, and objects are referred to by them .... One way of regarding the crucial notion of Intension in possible world semantlos is a development of Frege's notion of sense. [3] Barwlse and Perry clearly opt for the second approach. This is one reason for their concern with the problems posed by the propositional attitudes; for it has often been argued that these contexts doom any attempt at a theory of the second type. This is the burden of the dreaded "Sllngshot" -a weapon we shall ~aze at later. F?r the moment, though, I want simply to note ~ne connection of this dimension with that about the source and nature of intentionality.Just as (some particular features of) a particular X-ray carries information about the individual on which the machine was trained, e.g., that its leg is broken, so too does an utterance by the doctor of the sentence "It's bone is broken", in a context in which that same individual is what's referred to by • it".One can, of course, learn things about the X-ray and the X-ray machine as well as about the ~ oor patient; Just so, one can learn thlnEs about he doctor from her utterance.In both cases, the ~ ainlng of this ~ information is grounded n certain regularlties, in the one case mechanical, optical and electro-magnetic; in the other, perceptual, cognitive, and socialconventional.More to the point, in all cases the central locus of meaning is a relation, a regularity, between types of situation and the primary focus of significance is an external event or event-type. ~ Now, alas, for that return to set theory. I have studiously avoided telling the reader what situations,, events and/or event-~ypes are. Indeed, I haven't even said which, if any, of these are technical terms of Situation Semantics.Later I shall say enough (I hope) to generate an intuitive feel for situations; still, I have been speaklng freely of the centrality of relations between events or between event-types.Set-thecretlcallyspeaking, such relations are going to be (or be represented by) collections of ordered-palrs. 
C~llections, but not sets.These collections are proper classes relative to KPU; so, if thls be the last word on the matter, those very regularities so central to the account are not themselves available within the account -that is, they are not (represented by) set-theoretic constructs generated from the primitives by way of the resources of ~PU.For all such constructs are finite, me ~eedless to say, that isn't the last word on the matter.Still, this is scarcely the place for an extenced treatment of the issue; I raise it here simply to drive home a point about that first • Needless to say, we can talk about both minds and mental events and languages and linguistic events~ the key point is simply that a language user is not "really" always talklr~ first and foremost about his/her own mental state.We are not doomed to pathologlcal self-lnvolvement by being doomed to speak an d think.l.Assuming that we stick to an interpretation within the hereditarily finite sets, as we can. similarity between Montague and Situation Semantics.Montague wanted a very strong backEround theory within which models can be constructed precisely because he didn't want to have to worry about any (size) constraints on such models.B&Pput their money on a very weak set theory precisely because they want there to be such constraints; in particular because they want to erect a certain kind of barrier to the infinite. Obviously, large issues loom on the horizon; let's leave them there.I want now briefly to discuss 3 major aspects of Situation Semantics, aspects in which it differs fairly dramatically from Montague semantics. In passing.I will at least j~.. at the interrelationships among these, asloe from particular points of difference, remember that in the background there lurks a general conception of the use of language and its place in the overall scheme of things, a conception that is meant to inform and constrain detailed proposals.One other respect in which Barwise and Perry are orthodox is their acceptance of a form of the of , the principle that the meaning of a complex expression is a runctlon of the meanings of its constituents. This is the principle that is supposed to explain the proouctivity or generatlvity of languases, and the ability of finite creatures to master them. So, to adopt their favorite example. if Mitch now says to me, "You're dead wrong", what he says -what he asserts to be the case -is very different from what I would say if I were to utter the very same sentence directed at him. m" The very same sentence is used, "with the same meanir~"; but the message or information carried by its use differs.Moreover, the difference is systematically related to differences in the contexts in which the utterances are made.-Barwise and Perry take this phenomenon, often called indexlcality or token-reflexlvlty and all too often localized to the occurrence of particular words (e.g. t I , you , here , now , this , "that"), to oe of the essence of natural languages. They also note, however, that their relational account of meaning shows it to be a central feature of meaning in general.IT]hat smoke pouring out of the the window over there means that that particular building is on fire.Now the specific situation, smoke pouring out of that very building at that very time, will never be repeated.The next time we see smoke pouring out a building, it will be a new situation, and so will in one sense mean something else. It will mean that the building in the new situation is on fire at the new time. 
Each of these specific smoky situations means something, that the building then and there is on fire. This is...event meaning. The meaningful situations had something in common, the~ were of a co~n type, smoke pouring out o~ a building, a type that means fire. This is ...event-tYPe meanin~...What a particular case of smoke pouring out of a buildlng means, what it tells us about the mB&P choose to call such principles "semantic universals" -an unhappy choice, I think.JeWhlch, of course, ~ would never do.wider world, is determined by the meaning of smoke pouring out of a building and the particulars of this case of it. [3] Moreover, B&P contend that the fact that modern formal semantics grew out of a concern with the language(s) of matSematics has caused those working within the orthodox model-theoretic tradition either to ignore or to slight this crucial feature.*with the language of mathematics, and with the seemingly e6ernal nature of its sentences, led the founders of our field to neglect the efficiency of language.In our opinion this was a critical blunder, for efficiency lies at the very heart of meaning. Montague adopted a very narrow stance towards issues in pragmatics, concerning himself so*ely with indexicais and tense and not concerning himself at aii with other issues about the purposes of speakers and hearers and the corresponding uses of sentences. **e In addition, the treatment of formal pragmatics was to follow the lead of formal semantics:the central notion to be investigated was that of truth of a sentence, but now reiatlve to both an interpretation and a eontext of use or ~oint of reference.(See [10, 11, 12, 18] .) The working hypothesis" was that one could and should give a thoroughly uniform treatment of indexicallty within the model-theoretic framework deployed for the treatment of the indexlcal-free constructions. Thus, for example, in standard quantificational theory, one of the "parameters" of an interpretation is a domain or universe of discourse; in standard accounts of modal languages, another parameter is a set of possible worlds; in tense logics, a set of points of time.Why stop there?It is clear when we ~et to indexicals that the three parameters I've just mentioned aren't sufficient to determine a function to truth-values. Just think of two simultaneous utterances of "You are dead wrong" in the same world, with all other *Barbara Grosz hints at agreement with this Judgment. " [O] ne place that situation semantics is more compatible with efforts in natural-language processing than previous approaches [is tha£] context and facts about the world participate at two points: (I) in interpretation, for determining such things as who the speaker is, the time of utterance..;(2) in evaluation, for determining such things as..whether the relationships expressed in the utterance hold." **For the former, see [14] , see also [15] . m**Stalnaker is a wonderful example of someone working within the Montague tradition who does take the wider issues of praEmatics to heart. See [19] .things equal except speaker and addressee. 
In the interests of uniformity, stuff all such parameters into structures called points of reference, and who knows how many we'll need -see [9] , where points of reference are called indices.Then the meaning of a sentence is a function from points of references into truth values.A number of researchers working within the MontaEue tradition (in a sense there was ,,~ ucner) were unhappy with this particular result of Montague's quest for generality; the most important apostate being Kaplan. s There are complex technical issues involved in the apostasy, centrally those involving the interaction of indexical and intenslonal constructions -interactions which, at the very least, cast doubt on the doctrine that the intenslons of expressions are total functions from the set of points of reference to extensions of the expression at that point of reference.** The end result, anyway, is the proposal for some type of a non-unlform two-step account.Montaguesque points of reference should be broken in two, with posslbie worlds (and possibly, moments of time) playing one role and contexts of use (possibly inciudlng moments of time) another, different, role.In this scheme, sentences get associated with functions from contexts of use to propositions and these in turn are functions from contexts to truthvalues.Contexts, upon "application" to utterances of sentences, yield determinate propositions; worlds (world-times) function rather as points of evaluation,yielding truth values of determinate propositions.*** B&P, however, go beyond Kaplan's treatment, and in more than one direction. Cruclaily, the treatment of indexlcailty proper is only one aspect of the account of efficiency, in some ways, the least intriguing of the lot.Still, to drive home the first point: as it is with smoke pouring out of buildings, so too is it with sentences. The syntactic and semantic rules of a language, conventional regularities or constraints, determzne the meaning -the event-type meaning -of a sentence;features of the context of use of an utterance of that type get added in to determine what is actually said with that use. This is the event meaning of the utterance, also called its interpretation.Finally, that interpretation can be evaluated, either in a context which is essentially the same as the context of use, or some other; thereby yielding an evaluation of the utterance, (finally) a truth value.For B&P, the features of the context of use go beyond those associated with the presence of explicit indexical items in the utterance -people with personal pronouns, places with "locatives", times with tense markers and temporal indicators. In particular they mention two such parameters: speaker connections and resource situations. Some aspects of the former can be looked on as aspects of indexicality, following the lines of Kaplan It is a constraint they impose on themselves that they be able to account for significant regularities with respect to "the flow of information", in so far as that flow is mediated by the use of language and in cases where the information is not determined by a compositional semantic theory.And such cases are the norm. 
Compositionality holds only at the level of eventtype or linguistic meaning.The claim is that seeing linguistic meaning as a special case of the relational nature of meaning -that meaning resides in regularities between kinds of situations allows them to produce an account which satisfies this constraint.$9~" let me say something about proper names and some~nlng ease aoout resource situations.Let us put aside for the moment the semantic type that poor little "David Israel" gets assigned in [13] . Instead, we shall pretend that it gets associated with some individual." But which individual? Surely with one named "David Israel"; but there are bunches of such, and many, many more Davids.The probleml of course, is that proper names aren't proper. ~* Just as surely, at the level of linguistic meaning it makes no sense for me to ~ special treatment with respect to my name. Still, if you (or I) hear M_Itch Marcus. right after my talk, complaining to someone that "David is dead wrong", we'll know who's being maligned.Why so? Because we are aware of the speaker s connections; more finely, of the relevant connections in this instance.At the level of event-type or linguistic meaning, the contribution of a name is to refer to an individual of that name. **'e On the other hand, it is a feature of the context of use, that the speaker of an utterance containing that name is connected in certain ways to such and such individual~ of that name.Surely Mirth knows lots of Davids and we might find him saying "David thinks that David is really dead wrong". Of course, he ~ht be talking about someone inclined to harsh and "oSJectlve" self-crlticiam; ~robably not.Just one more thing about names and speaker connections.I noted above that for B&P, the interpretation of an utterance event underdetermines the information carried by that event.The use of names is a locus of nice examples of this. It is no part of the interpretation (event meaning) of Hitch's complaint about me that my name is "David"; but someone who saw him say this while he (Mitch, that is) was surreptitiously looking can learn that my name is "David", or even t~a~a{am the David Israel who gave the talk on Situation Semantics. Even without that, someone could learn that Mitch knows l is connected with) at least one person so named. 
Take a wild and wooly sentence such as "The dog is barking". Again, we want the denotations of such definite descriptions to be just plain individuals; but again, which individuals? Surely, there is more than one dog in the world; does the definite description fail to refer because of non-uniqueness? Hardly; at the level of sentence meaning, there is no question of its referring to some one individual dog. Rather, we must introduce into our semantic account a parameter for a set of resource situations. Suppose, for instance, that we have fixed a speaker, an audience and a (spatio-temporal) location of utterance of our sentence. These three are the main constituents of the parameter B&P call a discourse situation; note that this one parameter pretty much covers the contextual features Montague-Kaplan had in mind. Suppose also that a dog, otherwise unknown to our speaker and his/her audience, just walked by the front porch on which our protagonists are sitting. When the speaker utters the sentence, he/she is exploiting a situation in which both speaker and audience saw a lone dog stroll by; he/she is not describing either that particular recent situation or such a situation-type - there may have been many such; the two of them often sit out on that porch, and the neighborhood is full of dogs. Rather, the speaker is referring to a situation in which that dog is barking. Which dog? The one "contributed" by the resource situation; the one who just strolled by. It is an aspect of the linguistic meaning of a definite description that a resource situation should enter into the determination of its reference on a particular occasion of use; thus, it is an aspect of the meanings of sentences that a resource situation be a parameter in the determination of the interpretations (event meanings) of sentential utterances. Moreover, one can imagine cases where what is of interest is precisely some feature of which resource situation a speaker is exploiting on a particular occasion. And here, too, as in the case of names or, more generally, of speaker connections, the claim is that the relational theory of meaning and the consequent emphasis on the centrality of the Principle of Efficiency give Situation Semantics a handle on a range of regularities connecting uses of language with varieties of information that can be conveyed by such uses.
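The role of resource situations can likewise be given a toy rendering (again mine, and purely illustrative; the structures and names below are hypothetical): the definite description is required to be unique only relative to the resource situation the speaker is exploiting, not relative to the world at large.

# Toy rendering (hypothetical, not B&P's machinery) of a definite description
# resolved against a resource situation rather than against the whole world.
world = [
    {"id": "fido",  "kind": "dog"},
    {"id": "rex",   "kind": "dog"},
    {"id": "felix", "kind": "cat"},
]

resource_situation = {"fido"}   # the dog who just strolled past the porch

def the(kind, resource, domain):
    """Return the unique individual of the given kind within the resource
    situation, or None if the description fails to pick one out there."""
    candidates = [x for x in domain if x["kind"] == kind and x["id"] in resource]
    return candidates[0] if len(candidates) == 1 else None

print(the("dog", resource_situation, world))          # succeeds: one dog in the resource situation
print(the("dog", {x["id"] for x in world}, world))    # fails: more than one dog in the world at large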
For instance, it might be taken to hold between sets of possible worlds.Still, it is presumed (to put it mildly)that an important set of such relations among non-linguistic objects have syntactic realizations in relations holding among sentences which express those propositions.Moreover, that sentences stand in these relations is a function of certain specifiable aspects of their syntactic type -their "logical form".artificial, logical languages, this presumption of syntactic realization can be made more or less good; and anyway, the connections between, on the one hand, syntactic types and modes of composition, and semantic values on the other, must be made completely explicit.In particular, one specifies a set of expresslons as the logical constants of the language, specifies how to build up complex expressfons by the use of those constants, operating ultimately on the "non-logical constants", and then -ipso facto -one has a ~ erfectly usable and precise notion of loglcai orm.In the standard run of such artificial languages, sentences (that is: sentence types, there being no need for a notion of tokens) can be, and typically are, assigned truth-values as their semantic values.Such languages do not allow for indexicality; hence the talk about "eternal sentences".The linguistic meaning of such a sentence need not be distinguished from the ~roposltion expressed by a partlcular use of it.* unce Inuexicality is taken seriously, one can no longer attribute truth-values to senhences.(Note how this way of putting things suggests Just the unification of the treatment of indexlcallty with that of modality that appealed to Montague.) One can still, however, take as central the notion of a sentence being true in a context on an interpretation.The main reason for this move is that it allows one to develop a fairly standard notion of logical consequence or entailment at the level of sentences.Roughly, a set of sentences S entails a sentence A iff for every interpretation and for every context of use of that interpretation: if every sentence in S is true in a given context, then so too is A.are prepared to deemphaslze radically the notion of entaliment among sentences. As they fully realize they must provide a new notion -a notion of one statement following from another.At the very least then, our theory will seek to account for why the truth of certain ~ follows from the truth of other 9_~.This move has several important consequences...There is a lot of information available from utterances that is simply missed in traditional accounts, accounts that ignore the relational aspect of meaning...A semantic theory must go far beyond traditional "patterns of inference"...A rather startllng consequence of this is that there can be no syntactic counterpart, of the kind traditionally sought in proof theory and theories of loglcal form, to the semantic theory of consequence.For consequence is simply not a relation between purely syntactic elements. *Hence part, at least, of the oddity of talk about using such a language by uttering sentences thereof.What's at stake here?A whole lot, I fear. 
First, utterances -e.g., the makings of assertions -are actions.They are not linguistic items at all; they have no logical forms.Of course, they typically involve the production of linguistic tokens, which -by virtue of being of such and such types -may have such forms.(Typically, but not always -witness the shaking or nodding of a head, the winking of an eye, the pointing of a finger, all in appropriate contexts of use, of co,, ~e.) Thus, entailment relations among s~acements (utterances) can't be cashed in directly in terms of relations holding among sentences in virtue of special aspects of their syntactic shape.Remember what was said above about the main reason for opting out of an account based on statements and for an account based on sentence(type)-in-acontext.If you don't remember, let me (and David Kaplan) remind you:First, it is important to distinguish an utterance from a sentence-ln-~-context. The former notion is from the theory of speech acts, the latter from semantics. I Utterances take time, and utterances of distinct sentences can not be simultaneous (i.e., in the same context).But in order to develop a logic of demonstratives it seems most natural to be able to evaluate several premisses and a conclusion all in the same context. [8] . (The emphasis by way of underlining is mine -D.I.)A logic has to do with entailment and validity; these are the central semantic notions; sentences are their linguistic loci. This all sounds reasonable enough, except of course for that quite unmotivated presumption that contexts of use can't be spatio-temporally extended. And it seems correspondingly unreasonable when B&P opt out. IT]he ~ "Socrates is speaking" does not follow from the sentences "Every philosopher is speaking", "Socrates is a philosopher" even though this argument has the same "loglcal form" (on most accounts of logical form) as ["4 is an integral multiple of 2", "All integral multiples of 2 are even" (so) "4 is even".]In the first place, there is the matter of tense. At the very least the three sentences would have to be said at more or less the same time for the argument to be valid. Sentences are not true or false; only statements made with indicative sentences, utterances of certain kinds, are true or false.[3] (The example is mine -D.I.) B&P simplify somewhat.It is not required that all three sentences be uttered simultaneously (by one speaker).Roughly speaking, what is required is that the (spatio)temporal locations of their utterance be close together and that the "sum" of their locations overlap with that of some utterance of Socrates.But that isn't all.The speaker must be connected throughout to one and the same individual Socrates, else a pragmatic analogue of the fallacy of equivocation will result. The same (or something similar) could be said about the noun ~ hrase "every philosopher", for such phrases -just ike definite descriptions -require for their interpretation a resource situation.One can imagine a case wherein a given speaker, over a specified time and at a specified place, connected to one and the same guy named Socrates, exploits two different resource situations contributing two different groups of philosophers, one for each of *Thls is what is known in the trade as a stlpulatlve definition. 
the first two utterances.(The case is stronger, of course, if we substitute for the second sentence "Socrates is one of the philosophers.") It must certainly seem that too much of the baby is being tossed out with the water; but there are alleged to be (compensating?) gains:There is a lot of information available from utterances that is simply missed in traditional accounts, accounts that ignore the relational aspect of meaning. If someone comes up to me and says--Melanie saw a bear." I may learn not Just that Melanie saw a bear, but also that the speaker is somehow connected to Melanie in a way that allows him to refer to her using "Melanie".And I learn that the speaker is somehow in a position to have information about what Melanie saw.A semantic theory must go far beyond traditional "patterns of inference"to account for the external significance of language...A semantic theory must account for how language fits into the general flow of information.The capturing of entailments between statements is Just one aspect of a real theory of the information in an utterance.We think the relation theory of meaning provides the proper framework for such a theory.By looking at linguistic meanir~ as a relation between utterances and described situations, we can focus on the many coordinates that allow information to be extracted from utterances, information not only about the situation described'ni but also about the speaker and her place the world.[3]A. A ~U.t~ Ã Despite the heroic sentiments just expressed, B&P scarcely eschew sentences, a semantic account account of which they are, after all, aiming to provide.In the formal account statements get represented by n-tuples (of course), one element of which is the sentence uttered; and if you like. it is the sentence-under-syntactic-analysls.(This last bit is misleading, but not terribly.) Other elements of the tuple are a discourse situation and set of speaker connections and resource situations. Any%ray, there is the sentence.Given that, how about their logical form~q?Before touching on that issue, let me raise another and related feature of the account. This is the decision of B&P to let English sentences be the domain of their purely compositional semantic functions. For Montague, the "normal form" semantic interpretation of English went by way of a translation from English into some by now "fairly standard" logical language.(Such languages became fairly standard largely due to Montague's work.) Montague always claimed that thl3 was merely a pedagogical and simplifying device; and he provldeS an abstract account of how a "direct" semantic interpretation would go. Still, his practice leaves one with the taste of a search for hidden logical forms of a familiar type underlying the grammmtical forms of English sentences.No such intermediate logical language is forthcoming in Situation Semantics.First there is ALIASS:An Artificial Language for Illustrating Aspects of Situation Semantics... has more of the structure of English than any other artificial language we know, but it does not pretend to be a fragment of English, or any sort of "logical form" of English.It is Just what its name implies and nothing more.Next, and centrally, there is English. 
The decision to present a semantic theory of English directly may make the end product look even more different than it is.It certainly has the effect of depriving us of those familiar structures for which familiar "theorem provers" can be specified, and thus reinforces the sense of loss for seekers after a certain brand of entailments. Some may already feel the tell tale symptoms of withdrawal from an acute addiction.There is, however, more to it than that -or maybe the attendant liberation is enough. For instance, are English quantifiers logical constants, and if so, which ones?Which English quantlfiers correspond to which "formal" quantiflers? • Is there really a sententlai negation operator in English?Well, surely nit is not the case that" seems to qualify; but how about "not"? And how about conjunction?Consider, for example, a statement made with the sentence (I) Joe admires Sarah and she admires him. Let us confine our attention to the utterances in which (I) has the antecedent relations indicated by (I') Joe-1 admires Sarah-2 and she-2 admires him-1.While But this is not true of arbitrary statements.Moreover, as in the case above, if we have a [sic] conjunctive statement, there may be no coherent decomposition of it into two independent statements.Talk of conjunctive and especially of disjunctive statements is likely to be wildly misleading. For the latter suggests, quite wrongl[, that the utterer is either asserting one "dis3unct" or the other."A statement made using a disjunctive sentence is not the disjunction of two separate statements." ([3] .)In an appendix to "Situations and Attitudes", B&P suggest an analogue of propositional logic for statements within a very simple fragment of ALIASS. There is no (sentential) negation and no conditional; but more to the point, there are no unrestricted laws of statement entailment, e.g., between an arbitrary "conjunctive statement" and its two "conjunots".Things get even worse when we add complex noun phrases to the fragment.The mind boggles. That is, the only contribution made by a sentence, so embedded, to the whole can be its truth-value.In fact, the slingshot is not a "knockdown proof"; that it is not is recognized by many of its major slingers(?).(See, for instance, L16, 17].) Instead, in all of its forms, it rests on some form or other of two critical assumptions: Here, too, especially with respect to the second assumption, tricky technical issues about the treatment of singular terms -both simple and complex -in a standard logic with identity are involved.B&P purposefully ignore these issues. They are interested in English, not in sentences of a standard logic with identity; and anyway, those very same issues actually get "transformed" into precisely the issues about singular terms they do discuss, issues having to do with the distinction between referential and attributive uses of (complex) singular terms.(See their discussion in [2] and chapter 7 of [3].)To show my strength of character, I'm not going to discuss the sexy issue of transparency to substitution of singular terms except to say that, like Montague, B&P want a uniform treatment of singular terms as these occur both inside and outside of propositional attitude contexts; and that they also want to have it that the denotations of such terms are Just plain individual objects.(How perverse[) Rather, I want to look briefly at the first assumption about IThere is a class of exceptions to this, but I want not to get bogged down in details here. 
logical equivalence, i*With respect to the end-result, what's crucial is that B&P reject the alleged central consequence of the slingshot: that the primary semantic value of a sentence is its truth-value.Of course, given what we have already said, a better way to ~uc this is that for them, although statements are bearers of truth-values, the primary semantic value of a statement is not its truth value.honor is accorded to a collection of situations or events.Very roughly, the story goes like this: the syntactic and semantic rules of the language associate to each sentence type a type of situations or states-of-affalrs;intuitively, the type actualizations of which would be accurately, though partially, described by any statement made using the sentence.* Thus:Consider the sentence "I am sitting". Its meaning is, roughly, a relation that holds between an utterance ~ and a situation ~ Just in case there is a (spatio-temporal) location 1 and an individual ~, i is speaking at i. and in ~, is sitting at i .... The extension of this relation will be a larKe class of pairs of abstract situations.[3]-Now consider a particular utterance of that sentence, say by Mitch, at a specific location i'.any situation that has [Mitch] sitting at i' will be an interpretation of the utterance. An utterance usually describes lots of different situations, or at any rate partially describes them. Because of this, it is sometimes useful to think of the interpretation as the class of such situations.Then we can say that the situations appearing in the interpretation of our utterance vary greatly in how much they constrain the world...When uttered on a specific occasion, our sentence constrains the described situation to be a certain way, to be llke one of the situations in the interpretation.Or, one might say, it constrains the described situation to be one of the interpretations.[3] B. On Lo~IcalEcuivalence If the primary semantic value of a sentence is a collection or a type of situations, then it is not surprising that logically equivalent sentences -sentences true in the same models -might not have the same semantic values, and hence, might not mmOne point to make, though, is the following: the indexical personal pronouns are certainly singular terms. Frege's general line on the referential opacity of propositional attitude contexts certainly seems at its shakiest precisely in appiicatlon to such pronouns -and in general to indexical elements.And remember if B&P are right, there is an element of "indexicality" in the use of proper names.If Mitch believes that David is dead wrong and I'm (that) David, then Mitch believes that I'm dead wrong.If Mitch believes that I'm dead wrong and I am David Israel. then Mitch believes that (this) David Israel is wrong. [14, 15] ml should note that neither "situation" nor "event" is a technical term in Situation Semantics; though "event-type" is . be intersubstitutable salvo semantic value. Consider the two sentences: (I) Joe eats and (2) Joe eats, and Sarah sleeps or Sarah doesn't sleep. Let's grant that (I) and(2) are logically equivalent.But do they have the same "referent" or semantic value?If we think that sentences stand for situations..then we will not be at all inclined to accept the first principle required in the slingshot. 
The two logically equivalent sentences just do not have the same subject matter, they do not describe situations involving the same objects and properties.The first sentence will stand for all the situations in which Joe eats, the second sentence for those situations in which Joe eats and Sarah sleeps plus those in which Joe eats and Sarah doesn't sleep.Sarah is present in all of these.Since she is not present in may of the situations that "Joe eats" stands for, these sentences, though logically equivalent, do not stand for the same entity.(Obviously B&P are here ignoring the "indexlcality" inherent in proper uses of proper names -D.I.) [3] Notice that without so much as a glance in the direction of a single propositional attitude context, we can see how B&P can avoid certain wellknown troubles that plague the standard modeltheoretic treatments o~ such constructions.* Moreover and most importantly, they gain these fine powers of discrimination among "meanings" without following either Frege into a third realm of sense or Fodor (?) deep into the recesses of the mind. The significance of sentences, even as they occur in propositional attitude contexts, is out into the surrounding world, t*What's the bottom line?Clearly, it's too soon to say.Indeed, I assume many of you will simply want to wait until you can look at least at some treatment of some fragment of English. Others would llke as well to get some idea of how the project of Situation Semantics might be realized computationally.For instance, it is clear even from what little I've said that the semantic values of various kinds of expression types are going to be quite different from the norm and much thought will be needed to specify a formalism for representing and manipulating these representations adequately.Again, wouldn't it be nice to be told something at least about the metaphysics of Situation Semantics, about situations, abstract, actual, factual and real -all four types figure in some way in the account; about events, event-types, courses-of-events, schema, etc?Yes, it would be nice.Some, no doubt, were positively lusting after the scoop on how B&P handle the classic ~ uzzles of intensionality with respect to singular erms. And so on.All in good time.What I want to do, instead, is to end with a claim, Barbara Grosz's claim in fact, that At the moment, the bottom line with respect to Situation Semantics is not, I think, to be arrived at by toting up technical details, as bedazzling as these will doubtless be.Rather, it is to be gotten at by attention precisely to THE BIG PICTORE.The relational theory of meaning, and more broadly, the centrality in Situation Semantics of the "flow of information" -the view that that part of this flow that is mediated by the uses of language should be seen as "part and parcel of the general flow of information that uses natural meaning" -allows reasoned hope for a theoretical framework within which work in pragmatics ann one theory of speech acts, as well research in the theory of discourse, can find a proper place. In many of these areas, there is an abundance of insight, harvested from close descriptive analyses of a wide range of phenomena -a range hitherto hidden from both orthodox linguists and philosophers.There are now even glimmerings of regularities.But there has been no overarching theoretical structure within which to systematize these insights, and those scattered reguiaritles, and through which to relate them to the results of syntactic and formal semantic analyses.
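The "Joe eats" example can be made concrete with a small sketch (my own simplification, not B&P's definitions; in particular, the way a partial situation is taken to support a disjunction, by settling one of its disjuncts, is an assumption of the sketch). If situations are partial assignments of truth values to atomic facts, the situations that stand for "Joe eats" need not involve Sarah at all, whereas those that stand for the classically equivalent "Joe eats, and Sarah sleeps or Sarah doesn't sleep" must settle a fact about her, so the two sentences stand for different collections of situations.

# Small sketch (hypothetical, with my own notion of 'support'): situations as
# partial assignments of truth values to atomic facts.
from itertools import product

atoms = ["eat(Joe)", "sleep(Sarah)"]
situations = [dict(zip(atoms, vals)) for vals in product([True, False, None], repeat=len(atoms))]

def supports_joe_eats(s):
    return s["eat(Joe)"] is True

def supports_equivalent_sentence(s):
    # "Joe eats, and (Sarah sleeps or Sarah doesn't sleep)": the disjunction is
    # supported only by situations that settle the fact about Sarah either way.
    return s["eat(Joe)"] is True and s["sleep(Sarah)"] is not None

stands_for_1 = [s for s in situations if supports_joe_eats(s)]
stands_for_2 = [s for s in situations if supports_equivalent_sentence(s)]

print(len(stands_for_1), len(stands_for_2))            # 3 vs 2: different collections of situations
print(all(s in stands_for_1 for s in stands_for_2))    # True: the Sarah-involving situations are a proper subset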
null
null
null
null
A second approach is to focus on the external significance of language, on its connection with the described world rather than the describing mind. Sentences are classified not by the ideas they express, but by how they describe thlngs to be .... Frege adopted a third strategy. He postulated a third realm, a realm neither of ideas nor of worldly events, but of senses.Senses are the "philosopher's stone", the medium that coordinates all three elements in our equation: minds, words and objects.Hinds grasp senses, words express them, and objects are referred to by them .... One way of regarding the crucial notion of Intension in possible world semantlos is a development of Frege's notion of sense. [3] Barwlse and Perry clearly opt for the second approach. This is one reason for their concern with the problems posed by the propositional attitudes; for it has often been argued that these contexts doom any attempt at a theory of the second type. This is the burden of the dreaded "Sllngshot" -a weapon we shall ~aze at later. F?r the moment, though, I want simply to note ~ne connection of this dimension with that about the source and nature of intentionality.Just as (some particular features of) a particular X-ray carries information about the individual on which the machine was trained, e.g., that its leg is broken, so too does an utterance by the doctor of the sentence "It's bone is broken", in a context in which that same individual is what's referred to by • it".One can, of course, learn things about the X-ray and the X-ray machine as well as about the ~ oor patient; Just so, one can learn thlnEs about he doctor from her utterance.In both cases, the ~ ainlng of this ~ information is grounded n certain regularlties, in the one case mechanical, optical and electro-magnetic; in the other, perceptual, cognitive, and socialconventional.More to the point, in all cases the central locus of meaning is a relation, a regularity, between types of situation and the primary focus of significance is an external event or event-type. ~ Now, alas, for that return to set theory. I have studiously avoided telling the reader what situations,, events and/or event-~ypes are. Indeed, I haven't even said which, if any, of these are technical terms of Situation Semantics.Later I shall say enough (I hope) to generate an intuitive feel for situations; still, I have been speaklng freely of the centrality of relations between events or between event-types.Set-thecretlcallyspeaking, such relations are going to be (or be represented by) collections of ordered-palrs. 
C~llections, but not sets.These collections are proper classes relative to KPU; so, if thls be the last word on the matter, those very regularities so central to the account are not themselves available within the account -that is, they are not (represented by) set-theoretic constructs generated from the primitives by way of the resources of ~PU.For all such constructs are finite, me ~eedless to say, that isn't the last word on the matter.Still, this is scarcely the place for an extenced treatment of the issue; I raise it here simply to drive home a point about that first • Needless to say, we can talk about both minds and mental events and languages and linguistic events~ the key point is simply that a language user is not "really" always talklr~ first and foremost about his/her own mental state.We are not doomed to pathologlcal self-lnvolvement by being doomed to speak an d think.l.Assuming that we stick to an interpretation within the hereditarily finite sets, as we can. similarity between Montague and Situation Semantics.Montague wanted a very strong backEround theory within which models can be constructed precisely because he didn't want to have to worry about any (size) constraints on such models.B&Pput their money on a very weak set theory precisely because they want there to be such constraints; in particular because they want to erect a certain kind of barrier to the infinite. Obviously, large issues loom on the horizon; let's leave them there.I want now briefly to discuss 3 major aspects of Situation Semantics, aspects in which it differs fairly dramatically from Montague semantics. In passing.I will at least j~.. at the interrelationships among these, asloe from particular points of difference, remember that in the background there lurks a general conception of the use of language and its place in the overall scheme of things, a conception that is meant to inform and constrain detailed proposals.One other respect in which Barwise and Perry are orthodox is their acceptance of a form of the of , the principle that the meaning of a complex expression is a runctlon of the meanings of its constituents. This is the principle that is supposed to explain the proouctivity or generatlvity of languases, and the ability of finite creatures to master them. So, to adopt their favorite example. if Mitch now says to me, "You're dead wrong", what he says -what he asserts to be the case -is very different from what I would say if I were to utter the very same sentence directed at him. m" The very same sentence is used, "with the same meanir~"; but the message or information carried by its use differs.Moreover, the difference is systematically related to differences in the contexts in which the utterances are made.-Barwise and Perry take this phenomenon, often called indexlcality or token-reflexlvlty and all too often localized to the occurrence of particular words (e.g. t I , you , here , now , this , "that"), to oe of the essence of natural languages. They also note, however, that their relational account of meaning shows it to be a central feature of meaning in general.IT]hat smoke pouring out of the the window over there means that that particular building is on fire.Now the specific situation, smoke pouring out of that very building at that very time, will never be repeated.The next time we see smoke pouring out a building, it will be a new situation, and so will in one sense mean something else. It will mean that the building in the new situation is on fire at the new time. 
Each of these specific smoky situations means something, that the building then and there is on fire. This is...event meaning. The meaningful situations had something in common, the~ were of a co~n type, smoke pouring out o~ a building, a type that means fire. This is ...event-tYPe meanin~...What a particular case of smoke pouring out of a buildlng means, what it tells us about the mB&P choose to call such principles "semantic universals" -an unhappy choice, I think.JeWhlch, of course, ~ would never do.wider world, is determined by the meaning of smoke pouring out of a building and the particulars of this case of it. [3] Moreover, B&P contend that the fact that modern formal semantics grew out of a concern with the language(s) of matSematics has caused those working within the orthodox model-theoretic tradition either to ignore or to slight this crucial feature.*with the language of mathematics, and with the seemingly e6ernal nature of its sentences, led the founders of our field to neglect the efficiency of language.In our opinion this was a critical blunder, for efficiency lies at the very heart of meaning. Montague adopted a very narrow stance towards issues in pragmatics, concerning himself so*ely with indexicais and tense and not concerning himself at aii with other issues about the purposes of speakers and hearers and the corresponding uses of sentences. **e In addition, the treatment of formal pragmatics was to follow the lead of formal semantics:the central notion to be investigated was that of truth of a sentence, but now reiatlve to both an interpretation and a eontext of use or ~oint of reference.(See [10, 11, 12, 18] .) The working hypothesis" was that one could and should give a thoroughly uniform treatment of indexicallty within the model-theoretic framework deployed for the treatment of the indexlcal-free constructions. Thus, for example, in standard quantificational theory, one of the "parameters" of an interpretation is a domain or universe of discourse; in standard accounts of modal languages, another parameter is a set of possible worlds; in tense logics, a set of points of time.Why stop there?It is clear when we ~et to indexicals that the three parameters I've just mentioned aren't sufficient to determine a function to truth-values. Just think of two simultaneous utterances of "You are dead wrong" in the same world, with all other *Barbara Grosz hints at agreement with this Judgment. " [O] ne place that situation semantics is more compatible with efforts in natural-language processing than previous approaches [is tha£] context and facts about the world participate at two points: (I) in interpretation, for determining such things as who the speaker is, the time of utterance..;(2) in evaluation, for determining such things as..whether the relationships expressed in the utterance hold." **For the former, see [14] , see also [15] . m**Stalnaker is a wonderful example of someone working within the Montague tradition who does take the wider issues of praEmatics to heart. See [19] .things equal except speaker and addressee. 
In the interests of uniformity, stuff all such parameters into structures called points of reference, and who knows how many we'll need -see [9] , where points of reference are called indices.Then the meaning of a sentence is a function from points of references into truth values.A number of researchers working within the MontaEue tradition (in a sense there was ,,~ ucner) were unhappy with this particular result of Montague's quest for generality; the most important apostate being Kaplan. s There are complex technical issues involved in the apostasy, centrally those involving the interaction of indexical and intenslonal constructions -interactions which, at the very least, cast doubt on the doctrine that the intenslons of expressions are total functions from the set of points of reference to extensions of the expression at that point of reference.** The end result, anyway, is the proposal for some type of a non-unlform two-step account.Montaguesque points of reference should be broken in two, with posslbie worlds (and possibly, moments of time) playing one role and contexts of use (possibly inciudlng moments of time) another, different, role.In this scheme, sentences get associated with functions from contexts of use to propositions and these in turn are functions from contexts to truthvalues.Contexts, upon "application" to utterances of sentences, yield determinate propositions; worlds (world-times) function rather as points of evaluation,yielding truth values of determinate propositions.*** B&P, however, go beyond Kaplan's treatment, and in more than one direction. Cruclaily, the treatment of indexlcailty proper is only one aspect of the account of efficiency, in some ways, the least intriguing of the lot.Still, to drive home the first point: as it is with smoke pouring out of buildings, so too is it with sentences. The syntactic and semantic rules of a language, conventional regularities or constraints, determzne the meaning -the event-type meaning -of a sentence;features of the context of use of an utterance of that type get added in to determine what is actually said with that use. This is the event meaning of the utterance, also called its interpretation.Finally, that interpretation can be evaluated, either in a context which is essentially the same as the context of use, or some other; thereby yielding an evaluation of the utterance, (finally) a truth value.For B&P, the features of the context of use go beyond those associated with the presence of explicit indexical items in the utterance -people with personal pronouns, places with "locatives", times with tense markers and temporal indicators. In particular they mention two such parameters: speaker connections and resource situations. Some aspects of the former can be looked on as aspects of indexicality, following the lines of Kaplan It is a constraint they impose on themselves that they be able to account for significant regularities with respect to "the flow of information", in so far as that flow is mediated by the use of language and in cases where the information is not determined by a compositional semantic theory.And such cases are the norm. 
Compositionality holds only at the level of eventtype or linguistic meaning.The claim is that seeing linguistic meaning as a special case of the relational nature of meaning -that meaning resides in regularities between kinds of situations allows them to produce an account which satisfies this constraint.$9~" let me say something about proper names and some~nlng ease aoout resource situations.Let us put aside for the moment the semantic type that poor little "David Israel" gets assigned in [13] . Instead, we shall pretend that it gets associated with some individual." But which individual? Surely with one named "David Israel"; but there are bunches of such, and many, many more Davids.The probleml of course, is that proper names aren't proper. ~* Just as surely, at the level of linguistic meaning it makes no sense for me to ~ special treatment with respect to my name. Still, if you (or I) hear M_Itch Marcus. right after my talk, complaining to someone that "David is dead wrong", we'll know who's being maligned.Why so? Because we are aware of the speaker s connections; more finely, of the relevant connections in this instance.At the level of event-type or linguistic meaning, the contribution of a name is to refer to an individual of that name. **'e On the other hand, it is a feature of the context of use, that the speaker of an utterance containing that name is connected in certain ways to such and such individual~ of that name.Surely Mirth knows lots of Davids and we might find him saying "David thinks that David is really dead wrong". Of course, he ~ht be talking about someone inclined to harsh and "oSJectlve" self-crlticiam; ~robably not.Just one more thing about names and speaker connections.I noted above that for B&P, the interpretation of an utterance event underdetermines the information carried by that event.The use of names is a locus of nice examples of this. It is no part of the interpretation (event meaning) of Hitch's complaint about me that my name is "David"; but someone who saw him say this while he (Mitch, that is) was surreptitiously looking can learn that my name is "David", or even t~a~a{am the David Israel who gave the talk on Situation Semantics. Even without that, someone could learn that Mitch knows l is connected with) at least one person so named. 
Take a wild and wooly sentence such as "The dog is barking".Again, we want the denotations of such definite descriptions to be Just plain individuals; but again, which individuals?Surely, there is more than one dog in the world; does the definite description fail to refer because of non-uniqueness?Hardly; at the level of sentence meaning, there is no question of it's referring to some one individual dog.Rather we must introduce into our semantic account a ~ arameter for a set of resource situations.uppose, for " instance, that we have fixed a speaker, an audience and a (spatio-temporal) location of utterance of our sentence.These three are the main constituents of the parameter B&P call a discourse situation; note that this one parameter ~ retty much covers the contextual features ontague-Kaplan had in mind.Suppose also that a dog t otherwise unknown to our speaker and hls/her audience, just walked by the front porch, on which our protagonists are sitting.When the speaker utters the sentence he/she is exploiting a situation in which bo{h speaker and audience saw a lone dog stroll by; he/she is not describing either that particular recent situation or such a sltuation-type -there may have been many such; the two of them often sit out on that porch, the neighborhood is full of dogs.Rather, the speaker is referring to a situation in which that dog is barking.Which dog?. The one "contributed" by the resource situation; the one who just strolled by. It is an aspect of the linguistic meaning of a definite description that a resource situation should enter into the determination of its reference on a particular occasion of use; thus, an aspect of the meanings of sentences that a resource situation be a a parameter in the determination of the interpretations (event meanings) of sentential utterances.Moreover, one can imagine cases where what is of interest is precisely some feature of which resource situation a speaker is exploiting on a particular occasion.And here, too, as in the case of names or, more generally, of speaker connections, the claim is that the relational theory of meaning and the consequent emphasis on the centrality of the Principle of Efficiency give Situation Semantics a handle on a range of regularities connecting uses of languages with varieties of information that can be conveyed by such uses.As we have noted, Barwlse and Perry's treatment of efficiency goes beyond indexlcality and, as embedded within their overall account, goes well beyond a Kaplan-Montague theory. An important theme in this regard is the radical de-emphaslzlng of the role of entailment in their semantic theory and the correlative fixing on statements, not sentences, as the primary locus of interpretation. This is yet another way in which B&P go beyond Kaplan's forays beyond Montague. I have said that in standard (or even mildly deviant) model-theoretic accounts the key notion is that of truth on an interpretation, or in a model. Having said this, I might as well say that the key notion is that of entailment or logical consequence.A set of sentences S entails a sentence A iff there is no interpretation on which all of the sentences in S are true and A i3 false. From the purely model-theoretic point of view, this relation can be thought of as holding not between sentences, but between propositions (conceived of as the intenslons or meanings of sentences). 
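Since much of the preceding discussion turns on how a discourse situation and a resource situation jointly fix what an utterance of "The dog is barking" is about, a small sketch may help. This is a toy illustration of the idea under stated assumptions, not Barwise and Perry's formalism: the class names, the string-valued individuals, and the dictionary-shaped interpretation are invented here purely for exposition.

# Toy sketch (Python): a discourse situation and a resource situation
# jointly determine the interpretation of "The dog is barking".
from dataclasses import dataclass

@dataclass
class DiscourseSituation:
    speaker: str
    addressee: str
    location: str          # spatio-temporal location of the utterance

@dataclass
class ResourceSituation:
    # facts the speaker exploits, e.g. the dog both parties just saw
    individuals: dict

def interpret_definite(description, resource):
    # the resource situation, not the whole world, supplies the referent
    return resource.individuals[description]

def interpret_utterance(sentence, ds, rs):
    # event-type (linguistic) meaning: some dog barks at the utterance location;
    # event meaning (interpretation): *that* dog barks there
    dog = interpret_definite("dog", rs)
    return {"relation": "barking", "subject": dog, "location": ds.location}

porch = DiscourseSituation(speaker="A", addressee="B", location="front porch, now")
seen = ResourceSituation(individuals={"dog": "the dog that just strolled by"})
print(interpret_utterance("The dog is barking", porch, seen))
# -> the interpretation concerns the strolling dog, although many dogs exist

The point of the sketch is only that non-uniqueness of dogs in the world is no obstacle once a resource situation is a parameter of interpretation.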
For instance, it might be taken to hold between sets of possible worlds.Still, it is presumed (to put it mildly)that an important set of such relations among non-linguistic objects have syntactic realizations in relations holding among sentences which express those propositions.Moreover, that sentences stand in these relations is a function of certain specifiable aspects of their syntactic type -their "logical form".artificial, logical languages, this presumption of syntactic realization can be made more or less good; and anyway, the connections between, on the one hand, syntactic types and modes of composition, and semantic values on the other, must be made completely explicit.In particular, one specifies a set of expresslons as the logical constants of the language, specifies how to build up complex expressfons by the use of those constants, operating ultimately on the "non-logical constants", and then -ipso facto -one has a ~ erfectly usable and precise notion of loglcai orm.In the standard run of such artificial languages, sentences (that is: sentence types, there being no need for a notion of tokens) can be, and typically are, assigned truth-values as their semantic values.Such languages do not allow for indexicality; hence the talk about "eternal sentences".The linguistic meaning of such a sentence need not be distinguished from the ~roposltion expressed by a partlcular use of it.* unce Inuexicality is taken seriously, one can no longer attribute truth-values to senhences.(Note how this way of putting things suggests Just the unification of the treatment of indexlcallty with that of modality that appealed to Montague.) One can still, however, take as central the notion of a sentence being true in a context on an interpretation.The main reason for this move is that it allows one to develop a fairly standard notion of logical consequence or entailment at the level of sentences.Roughly, a set of sentences S entails a sentence A iff for every interpretation and for every context of use of that interpretation: if every sentence in S is true in a given context, then so too is A.are prepared to deemphaslze radically the notion of entaliment among sentences. As they fully realize they must provide a new notion -a notion of one statement following from another.At the very least then, our theory will seek to account for why the truth of certain ~ follows from the truth of other 9_~.This move has several important consequences...There is a lot of information available from utterances that is simply missed in traditional accounts, accounts that ignore the relational aspect of meaning...A semantic theory must go far beyond traditional "patterns of inference"...A rather startllng consequence of this is that there can be no syntactic counterpart, of the kind traditionally sought in proof theory and theories of loglcal form, to the semantic theory of consequence.For consequence is simply not a relation between purely syntactic elements. *Hence part, at least, of the oddity of talk about using such a language by uttering sentences thereof.What's at stake here?A whole lot, I fear. 
First, utterances -e.g., the makings of assertions -are actions.They are not linguistic items at all; they have no logical forms.Of course, they typically involve the production of linguistic tokens, which -by virtue of being of such and such types -may have such forms.(Typically, but not always -witness the shaking or nodding of a head, the winking of an eye, the pointing of a finger, all in appropriate contexts of use, of co,, ~e.) Thus, entailment relations among s~acements (utterances) can't be cashed in directly in terms of relations holding among sentences in virtue of special aspects of their syntactic shape.Remember what was said above about the main reason for opting out of an account based on statements and for an account based on sentence(type)-in-acontext.If you don't remember, let me (and David Kaplan) remind you:First, it is important to distinguish an utterance from a sentence-ln-~-context. The former notion is from the theory of speech acts, the latter from semantics. I Utterances take time, and utterances of distinct sentences can not be simultaneous (i.e., in the same context).But in order to develop a logic of demonstratives it seems most natural to be able to evaluate several premisses and a conclusion all in the same context. [8] . (The emphasis by way of underlining is mine -D.I.)A logic has to do with entailment and validity; these are the central semantic notions; sentences are their linguistic loci. This all sounds reasonable enough, except of course for that quite unmotivated presumption that contexts of use can't be spatio-temporally extended. And it seems correspondingly unreasonable when B&P opt out. IT]he ~ "Socrates is speaking" does not follow from the sentences "Every philosopher is speaking", "Socrates is a philosopher" even though this argument has the same "loglcal form" (on most accounts of logical form) as ["4 is an integral multiple of 2", "All integral multiples of 2 are even" (so) "4 is even".]In the first place, there is the matter of tense. At the very least the three sentences would have to be said at more or less the same time for the argument to be valid. Sentences are not true or false; only statements made with indicative sentences, utterances of certain kinds, are true or false.[3] (The example is mine -D.I.) B&P simplify somewhat.It is not required that all three sentences be uttered simultaneously (by one speaker).Roughly speaking, what is required is that the (spatio)temporal locations of their utterance be close together and that the "sum" of their locations overlap with that of some utterance of Socrates.But that isn't all.The speaker must be connected throughout to one and the same individual Socrates, else a pragmatic analogue of the fallacy of equivocation will result. The same (or something similar) could be said about the noun ~ hrase "every philosopher", for such phrases -just ike definite descriptions -require for their interpretation a resource situation.One can imagine a case wherein a given speaker, over a specified time and at a specified place, connected to one and the same guy named Socrates, exploits two different resource situations contributing two different groups of philosophers, one for each of *Thls is what is known in the trade as a stlpulatlve definition. 
the first two utterances.(The case is stronger, of course, if we substitute for the second sentence "Socrates is one of the philosophers.") It must certainly seem that too much of the baby is being tossed out with the water; but there are alleged to be (compensating?) gains:There is a lot of information available from utterances that is simply missed in traditional accounts, accounts that ignore the relational aspect of meaning. If someone comes up to me and says--Melanie saw a bear." I may learn not Just that Melanie saw a bear, but also that the speaker is somehow connected to Melanie in a way that allows him to refer to her using "Melanie".And I learn that the speaker is somehow in a position to have information about what Melanie saw.A semantic theory must go far beyond traditional "patterns of inference"to account for the external significance of language...A semantic theory must account for how language fits into the general flow of information.The capturing of entailments between statements is Just one aspect of a real theory of the information in an utterance.We think the relation theory of meaning provides the proper framework for such a theory.By looking at linguistic meanir~ as a relation between utterances and described situations, we can focus on the many coordinates that allow information to be extracted from utterances, information not only about the situation described'ni but also about the speaker and her place the world.[3]A. A ~U.t~ Ã Despite the heroic sentiments just expressed, B&P scarcely eschew sentences, a semantic account account of which they are, after all, aiming to provide.In the formal account statements get represented by n-tuples (of course), one element of which is the sentence uttered; and if you like. it is the sentence-under-syntactic-analysls.(This last bit is misleading, but not terribly.) Other elements of the tuple are a discourse situation and set of speaker connections and resource situations. Any%ray, there is the sentence.Given that, how about their logical form~q?Before touching on that issue, let me raise another and related feature of the account. This is the decision of B&P to let English sentences be the domain of their purely compositional semantic functions. For Montague, the "normal form" semantic interpretation of English went by way of a translation from English into some by now "fairly standard" logical language.(Such languages became fairly standard largely due to Montague's work.) Montague always claimed that thl3 was merely a pedagogical and simplifying device; and he provldeS an abstract account of how a "direct" semantic interpretation would go. Still, his practice leaves one with the taste of a search for hidden logical forms of a familiar type underlying the grammmtical forms of English sentences.No such intermediate logical language is forthcoming in Situation Semantics.First there is ALIASS:An Artificial Language for Illustrating Aspects of Situation Semantics... has more of the structure of English than any other artificial language we know, but it does not pretend to be a fragment of English, or any sort of "logical form" of English.It is Just what its name implies and nothing more.Next, and centrally, there is English. 
The decision to present a semantic theory of English directly may make the end product look even more different than it is.It certainly has the effect of depriving us of those familiar structures for which familiar "theorem provers" can be specified, and thus reinforces the sense of loss for seekers after a certain brand of entailments. Some may already feel the tell tale symptoms of withdrawal from an acute addiction.There is, however, more to it than that -or maybe the attendant liberation is enough. For instance, are English quantifiers logical constants, and if so, which ones?Which English quantlfiers correspond to which "formal" quantiflers? • Is there really a sententlai negation operator in English?Well, surely nit is not the case that" seems to qualify; but how about "not"? And how about conjunction?Consider, for example, a statement made with the sentence (I) Joe admires Sarah and she admires him. Let us confine our attention to the utterances in which (I) has the antecedent relations indicated by (I') Joe-1 admires Sarah-2 and she-2 admires him-1.While But this is not true of arbitrary statements.Moreover, as in the case above, if we have a [sic] conjunctive statement, there may be no coherent decomposition of it into two independent statements.Talk of conjunctive and especially of disjunctive statements is likely to be wildly misleading. For the latter suggests, quite wrongl[, that the utterer is either asserting one "dis3unct" or the other."A statement made using a disjunctive sentence is not the disjunction of two separate statements." ([3] .)In an appendix to "Situations and Attitudes", B&P suggest an analogue of propositional logic for statements within a very simple fragment of ALIASS. There is no (sentential) negation and no conditional; but more to the point, there are no unrestricted laws of statement entailment, e.g., between an arbitrary "conjunctive statement" and its two "conjunots".Things get even worse when we add complex noun phrases to the fragment.The mind boggles. That is, the only contribution made by a sentence, so embedded, to the whole can be its truth-value.In fact, the slingshot is not a "knockdown proof"; that it is not is recognized by many of its major slingers(?).(See, for instance, L16, 17].) Instead, in all of its forms, it rests on some form or other of two critical assumptions: Here, too, especially with respect to the second assumption, tricky technical issues about the treatment of singular terms -both simple and complex -in a standard logic with identity are involved.B&P purposefully ignore these issues. They are interested in English, not in sentences of a standard logic with identity; and anyway, those very same issues actually get "transformed" into precisely the issues about singular terms they do discuss, issues having to do with the distinction between referential and attributive uses of (complex) singular terms.(See their discussion in [2] and chapter 7 of [3].)To show my strength of character, I'm not going to discuss the sexy issue of transparency to substitution of singular terms except to say that, like Montague, B&P want a uniform treatment of singular terms as these occur both inside and outside of propositional attitude contexts; and that they also want to have it that the denotations of such terms are Just plain individual objects.(How perverse[) Rather, I want to look briefly at the first assumption about IThere is a class of exceptions to this, but I want not to get bogged down in details here. 
logical equivalence, i*With respect to the end-result, what's crucial is that B&P reject the alleged central consequence of the slingshot: that the primary semantic value of a sentence is its truth-value.Of course, given what we have already said, a better way to ~uc this is that for them, although statements are bearers of truth-values, the primary semantic value of a statement is not its truth value.honor is accorded to a collection of situations or events.Very roughly, the story goes like this: the syntactic and semantic rules of the language associate to each sentence type a type of situations or states-of-affalrs;intuitively, the type actualizations of which would be accurately, though partially, described by any statement made using the sentence.* Thus:Consider the sentence "I am sitting". Its meaning is, roughly, a relation that holds between an utterance ~ and a situation ~ Just in case there is a (spatio-temporal) location 1 and an individual ~, i is speaking at i. and in ~, is sitting at i .... The extension of this relation will be a larKe class of pairs of abstract situations.[3]-Now consider a particular utterance of that sentence, say by Mitch, at a specific location i'.any situation that has [Mitch] sitting at i' will be an interpretation of the utterance. An utterance usually describes lots of different situations, or at any rate partially describes them. Because of this, it is sometimes useful to think of the interpretation as the class of such situations.Then we can say that the situations appearing in the interpretation of our utterance vary greatly in how much they constrain the world...When uttered on a specific occasion, our sentence constrains the described situation to be a certain way, to be llke one of the situations in the interpretation.Or, one might say, it constrains the described situation to be one of the interpretations.[3] B. On Lo~IcalEcuivalence If the primary semantic value of a sentence is a collection or a type of situations, then it is not surprising that logically equivalent sentences -sentences true in the same models -might not have the same semantic values, and hence, might not mmOne point to make, though, is the following: the indexical personal pronouns are certainly singular terms. Frege's general line on the referential opacity of propositional attitude contexts certainly seems at its shakiest precisely in appiicatlon to such pronouns -and in general to indexical elements.And remember if B&P are right, there is an element of "indexicality" in the use of proper names.If Mitch believes that David is dead wrong and I'm (that) David, then Mitch believes that I'm dead wrong.If Mitch believes that I'm dead wrong and I am David Israel. then Mitch believes that (this) David Israel is wrong. [14, 15] ml should note that neither "situation" nor "event" is a technical term in Situation Semantics; though "event-type" is . be intersubstitutable salvo semantic value. Consider the two sentences: (I) Joe eats and (2) Joe eats, and Sarah sleeps or Sarah doesn't sleep. Let's grant that (I) and(2) are logically equivalent.But do they have the same "referent" or semantic value?If we think that sentences stand for situations..then we will not be at all inclined to accept the first principle required in the slingshot. 
The two logically equivalent sentences just do not have the same subject matter, they do not describe situations involving the same objects and properties.The first sentence will stand for all the situations in which Joe eats, the second sentence for those situations in which Joe eats and Sarah sleeps plus those in which Joe eats and Sarah doesn't sleep.Sarah is present in all of these.Since she is not present in may of the situations that "Joe eats" stands for, these sentences, though logically equivalent, do not stand for the same entity.(Obviously B&P are here ignoring the "indexlcality" inherent in proper uses of proper names -D.I.) [3] Notice that without so much as a glance in the direction of a single propositional attitude context, we can see how B&P can avoid certain wellknown troubles that plague the standard modeltheoretic treatments o~ such constructions.* Moreover and most importantly, they gain these fine powers of discrimination among "meanings" without following either Frege into a third realm of sense or Fodor (?) deep into the recesses of the mind. The significance of sentences, even as they occur in propositional attitude contexts, is out into the surrounding world, t*What's the bottom line?Clearly, it's too soon to say.Indeed, I assume many of you will simply want to wait until you can look at least at some treatment of some fragment of English. Others would llke as well to get some idea of how the project of Situation Semantics might be realized computationally.For instance, it is clear even from what little I've said that the semantic values of various kinds of expression types are going to be quite different from the norm and much thought will be needed to specify a formalism for representing and manipulating these representations adequately.Again, wouldn't it be nice to be told something at least about the metaphysics of Situation Semantics, about situations, abstract, actual, factual and real -all four types figure in some way in the account; about events, event-types, courses-of-events, schema, etc?Yes, it would be nice.Some, no doubt, were positively lusting after the scoop on how B&P handle the classic ~ uzzles of intensionality with respect to singular erms. And so on.All in good time.What I want to do, instead, is to end with a claim, Barbara Grosz's claim in fact, that At the moment, the bottom line with respect to Situation Semantics is not, I think, to be arrived at by toting up technical details, as bedazzling as these will doubtless be.Rather, it is to be gotten at by attention precisely to THE BIG PICTORE.The relational theory of meaning, and more broadly, the centrality in Situation Semantics of the "flow of information" -the view that that part of this flow that is mediated by the uses of language should be seen as "part and parcel of the general flow of information that uses natural meaning" -allows reasoned hope for a theoretical framework within which work in pragmatics ann one theory of speech acts, as well research in the theory of discourse, can find a proper place. In many of these areas, there is an abundance of insight, harvested from close descriptive analyses of a wide range of phenomena -a range hitherto hidden from both orthodox linguists and philosophers.There are now even glimmerings of regularities.But there has been no overarching theoretical structure within which to systematize these insights, and those scattered reguiaritles, and through which to relate them to the results of syntactic and formal semantic analyses. Appendix:
null
null
null
null
{ "paperhash": [ "cocchiarella|situations_and_attitudes.", "alston|knowledge_and_the_flow_of_information", "mcdermott|the_language_of_thought", "perry|the_problem_of_the_essential_indexical", "fodor|the_language_of_thought", "quine|the_ways_of_paradox,_and_other_essays" ], "title": [ "Situations and Attitudes.", "Knowledge and the Flow of Information", "THE LANGUAGE OF THOUGHT", "The Problem Of The Essential Indexical", "The Language of Thought", "The ways of paradox, and other essays" ], "abstract": [ "In this provocative book, Barwise and Perry tackle the slippery subject of \"meaning, \" a subject that has long vexed linguists, language philosophers, and logicians.", "Acknowledgments Preface 1. Communication theory 2. Communication and information 3. A semantic theory of information 4. Knowledge 5. The communication channel 6. Sensation and perception 7. Coding and content 8. The structure of belief 9. Concepts and meaning Notes Index.", "The role of linguistically structured representations in general intelligence.", "I once followed a trail of sugar on a supermarket floor, pushing my cart down the aisle on one side of a tall counter and back the aisle on the other, seeking the shopper with the torn sack to tell him he was making a mess. With each trip around the counter, the trail became thicker. But I seemed unable to catch up. Finally it dawned on me. I was the shopper I was trying to catch. I believed at the outset that the shopper with a torn sack was making a mess. And I was right. But I didn't believe that I was making a mess. That seems to be something I came to believe. And when I came to believe that, I stopped following the trail around the counter, and rearranged the torn sack in my cart. My change in beliefs seems to explain my change in behavior. My aim in this paper is to make a key point about the characterization of this change, and of beliefs in general. At first characterizing the change seems easy. My beliefs changed, didn't they, in that I came to have a new one, namely, that I am making a mess? But things are not so simple. The reason they are not is the importance of the word \"I\" in my expression of what I came to believe. When we replace it with other designations of me, we no longer have an explanation of my behavior and so, it seems, no longer an attribution of the same belief. It seems to be an essential indexical. But without such a replacement, all we have to identify the belief is the sentence \"I am making a mess\". But that sentence by itself doesn't seem to identify the crucial belief, for if someone else had said it, they would have expressed a different belief, a false one. I argue that the essential indexical poses a problem for various otherwise plausible accounts of belief. I first argue that it is a problem for the view that belief is a relation between subjects and propositions conceived as bearers of truth and", "In a compelling defense of the speculative approach to the philosophy of mind, Jerry Fodor argues that, while our best current theories of cognitive psychology view many higher processes as computational, computation itself presupposes an internal medium of representation. 
Fodor's prime concerns are to buttress the notion of internal representation from a philosophical viewpoint, and to determine those characteristics of this conceptual construct using the empirical data available from linguistics and cognitive psychology.", "The ways of paradox, and other essays , The ways of paradox, and other essays , کتابخانه دیجیتال و فن آوری اطلاعات دانشگاه امام صادق(ع)" ], "authors": [ { "name": [ "N. Cocchiarella", "J. Barwise", "J. Perry" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "W. Alston", "Fred I. Dretske" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "D. McDermott" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "J. Perry" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Jerry A. Fodor" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "W. Quine" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null, null, null, null ], "s2_corpus_id": [ "124893762", "62769182", "26801195", "144213799", "272986577", "144755139" ], "intents": [ [], [], [], [], [], [] ], "isInfluential": [ false, false, false, false, false, false ] }
null
500
0.008
null
null
null
null
null
null
null
null
d886320cc431c57f12027584fdce3434bf4caf9e
5222302
null
Deterministic Parsing of Syntactic Non-fluencies
It is often remarked that natural language, used naturally, is unnaturally ungrammatical.* Spontaneous speech contains all manner of false starts, hesitations, and self-corrections that disrupt the well-formedness of strings. It is a mystery, then, that despite this apparent wide deviation from grammatical norms, people have little difficulty understanding the non-fluent speech that is the essential medium of everyday life. And it is a still greater mystery that children can succeed in acquiring the grammar of a language on the basis of evidence provided by a mixed set of apparently grammatical and ungrammatical strings.
{ "name": [ "Hindle, Donald" ], "affiliation": [ null ] }
null
null
21st Annual Meeting of the Association for Computational Linguistics
1983-06-01
9
198
null
It is often remarked that natural language, used naturally, is unnaturally ungrammatical.* Spontaneous speech contains all manner of false starts, hesitations, and self-corrections that disrupt the well-formedness of strings. It is a mystery, then, that despite this apparent wide deviation from grammatical norms, people have little difficulty understanding the non-fluent speech that is the essential medium of everyday life. And it is a still greater mystery that children can succeed in acquiring the grammar of a language on the basis of evidence provided by a mixed set of apparently grammatical and ungrammatical strings. In this paper I present a system of rules for resolving the non-fluencies of speech, implemented as part of a computational model of syntactic processing. The essential idea is that non-fluencies occur when a speaker corrects something that he or she has already said out loud. Since words once said cannot be unsaid, a speaker can only accomplish a self-correction by saying something additional -- namely the intended words. The intended words are supposed to substitute for the wrongly produced words. For example, in sentence (1), the speaker initially said I but meant we. (1) I was--we were hungry. The problem for the hearer, as for any natural language understanding system, is to determine what words are to be expunged from the actual words said to find the intended sentence. Labov (1966) provided the key to solving this problem when he noted that a phonetic signal (specifically, a markedly abrupt cut-off of the speech signal) always marks the site where self-correction takes place. Of course, finding the site of a self-correction is only half the problem; it remains to specify what should be removed. A first guess suggests that this must be a non-deterministic problem, requiring complex reasoning about what the speaker meant to say. Labov claimed that a simple set of rules operating on the surface string would specify exactly what should be changed, transforming nearly all non-fluent strings into fully grammatical sentences. The specific set of transformational rules Labov proposed was not formally adequate, in part because they were surface transformations which ignored syntactic constituenthood. But his work forms the basis of this current analysis. Labov's claim was not of course that ungrammatical sentences are never produced in speech, for that clearly would be false. Rather, it seems that truly ungrammatical productions represent only a tiny fraction of the spoken output, and in the preponderance of cases, an apparent ungrammaticality can be resolved by simple editing rules. In order to make sense of non-fluent speech, it is essential that the various types of grammatical deviation be distinguished. This point has sometimes been missed, and fundamentally different kinds of deviation from standard grammaticality have been treated together because they all present the same sort of problem for a natural language understanding system. For example, Hayes and Mouradian (1981) mix together speaker-initiated self-corrections with fragmentary sentences of all sorts: people often leave out or repeat words or phrases, break off what they are saying and rephrase or replace it, speak in fragments, or otherwise use incorrect grammar (1981:231). Ultimately, it will be essential to distinguish between non-fluent productions on the one hand, and constructions that are fully grammatical though not yet understood, on the other.
Although we may not know in detail the correct characterization of such processes as ellipsis and conjunction, they are without doubt fully productive grammatical processes. Without an understanding of the differences in the kinds of non-fluencies that occur, we are left with a kind of grab bag of grammatical deviation that can never be analyzed except by some sort of general purpose mechanisms. In this paper, I want to characterize the subset of spoken non-fluencies that can be treated as self-corrections, and to describe how they are handled in the context of a deterministic parser. I assume that a system for dealing with self-corrections similar to the one I describe must be a part of the competence of any natural language user. I will begin by discussing the range of non-fluencies that occur in speech. Then, after reviewing the notion of deterministic parsing, I will describe the model of parsing self-corrections in detail, and report results from a sample of 1500 sentences. Finally, I discuss some implications of this theory of self-correction, particularly for the problem of language acquisition.
null
The editing system that I will describe is implemented on top of a deterministic parser, called Fidditch, based on the processing principles proposed by Marcus (1980). It takes as input a sentence of standard words and returns a labeled bracketing that represents the syntactic structure as an annotated tree structure. Fidditch was designed to process transcripts of spontaneous speech, and to produce an analysis, partial if necessary, for a large corpus of interview transcripts. Because it is a deterministic parser, it produces only one analysis for each sentence. When Fidditch is unable to build larger constituents out of subphrases, it moves on to the next constituent of the sentence. In brief, the parsing process proceeds as follows. The words in a transcribed sentence (where sentence means one tensed clause together with all subordinate clauses) are assigned a lexical category (or set of lexical categories) on the basis of a 2000 word lexicon and a morphological analyzer. The lexicon contains, for each word, a list of possible lexical categories, subcategorization information, and in a few cases, information on compound words. For example, the entry for round states that it is a noun, verb, adjective or preposition, that as a verb it is subcategorized for the movable particles out and up and for NP, and that it may be part of the compound adjective/preposition round about. Once the lexical analysis is complete, the phrase structure tree is constructed on the basis of pattern-action rules using two internal data structures: 1) a push-down stack of incomplete nodes, and 2) a buffer of complete constituents, into which the grammar rules can look through a window of three constituents. The parser matches rule patterns to the configuration of the window and stack. Its basic actions include: starting to build a new node by pushing a category onto the stack; attaching the first element of the window to the stack; and dropping subtrees from the stack into the first position in the window when they are complete. The parser proceeds deterministically in the sense that no aspect of the tree structure, once built, may be altered by any rule. (See Marcus 1980 for a comprehensive discussion of this theory of parsing.)
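To make the parser's bookkeeping concrete, the following is a minimal schematic sketch of the two data structures and three basic actions just described. It is an illustration only, not Fidditch itself: the Node class, the category labels, and the toy input are invented for exposition, and no grammar rules or rule patterns are modeled.

# Schematic sketch (Python) of the stack, the buffer window, and the
# three basic actions; once performed, an action is never undone.
class Node:
    def __init__(self, category, children=None):
        self.category = category
        self.children = children or []
    def __repr__(self):
        return f"{self.category}{self.children}"

class DeterministicParser:
    def __init__(self, constituents):
        self.stack = []                    # push-down stack of incomplete nodes
        self.buffer = list(constituents)   # completed constituents; rules see
                                           # only the first three elements
    def window(self):
        return self.buffer[:3]

    def create(self, category):
        self.stack.append(Node(category))            # start building a new node

    def attach(self):
        self.stack[-1].children.append(self.buffer.pop(0))  # attach 1st window item

    def drop(self):
        self.buffer.insert(0, self.stack.pop())       # completed node back to window

# Toy run over pre-tagged constituents for "I was hungry":
p = DeterministicParser([Node("PRO-I"), Node("VERB-was"), Node("ADJ-hungry")])
p.create("S"); p.attach(); p.attach(); p.attach(); p.drop()
print(p.buffer[0])   # -> S[PRO-I[], VERB-was[], ADJ-hungry[]]

The determinism of the design shows up in the three actions: nothing ever detaches a child or reopens a dropped node, so every structure-building decision is final.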
Linguists have been of less help in describing the nature of spoken non-fluencies than might have been hoped; relatively little attention has been devoted to the actual performance of speakers, and studies that claim to be based on performance data seem to ignore the problem of non-fluencies. (Notable exceptions include Fromkin (1980) and Thompson (1980).) For the discussion of self-correction, I want to distinguish three types of non-fluencies that typically occur in speech.

1. Unusual constructions. It is perhaps worth emphasizing that the mere fact that a parser does not handle a construction, or that linguists have not discussed it, does not mean that it is ungrammatical. In speech, there is a range of more or less unusual constructions which occur productively (some occur in writing as well), and which cannot be considered syntactically ill-formed. For example,

(2a) I imagine there's a lot of them must have had some good reasons not to go there.
(2b) That's the only thing he does is fight.

Sentence (2a) is an example of non-standard subject relative clauses that are common in speech. Sentence (2b), which seems to have two tensed "be" verbs in one clause, is a productive sentence type that occurs regularly, though rarely, in all sorts of spoken discourse (see Kroch and Hindle 1981). I assume that a correct and complete grammar for a parser will have to deal with all grammatical processes, marginal as well as central. I have nothing further to say about unusual constructions here.

2. True ungrammaticalities. A small percentage of spoken utterances are truly ungrammatical. That is, they do not result from any regular grammatical process (however rare), nor are they instances of successful self-correction. Unexceptionable examples are hard to find, but the following give the flavor.

(3a) I've seen it happen is two girls fight.
(3b) Today if you beat a guy wants to blow your head off for something.
(3c) And aa a lot of the kids that are from our neighborhood--there's one section that the kids aren't too--think they would usually--the--the ones that were the--the drop outs and the stoneheads.

Labov (1966) reported that less than 2% of the sentences in a sample of a variety of types of conversational English were ungrammatical in this sense, a result that is confirmed by current work (Kroch and Hindle 1981).

3. Self-corrected strings. This type of non-fluency is the focus of this paper. Self-corrected strings all have the characteristic that some extraneous material was apparently inserted, and that expunging some substring results in a well-formed syntactic structure, which is apparently consistent with the meaning that is intended. In the degenerate case, self-correction inserts non-lexical material, which the syntactic processor ignores, as in (4). The minimal non-lexical material that self-correction might insert is the editing signal itself. Other cases (examples 6-10 below) are only interpretable given the assumption that certain words, which are potentially part of the syntactic structure, are to be removed from the syntactic analysis.

The status of the material that is corrected by self-correction and is expunged by the editing rules is somewhat odd. I use the term expunction to mean that it is removed from any further syntactic analysis. This does not mean, however, that a self-corrected string is unavailable for semantic processing. Although the self-corrected string is edited from the syntactic analysis, it is nevertheless available for semantic interpretation. Jefferson (1974) discusses the example

(5) ... [thuh]--[thiy] officer ...

where the initial, self-corrected string (with the preconsonantal form of the rather than the pre-vocalic form) makes it clear that the speaker originally intended to refer to the police by some word other than officer.

I should also note that the problems addressed by the self-correction component that I am concerned with are only part of the kind of deviance that occurs in natural language use.
Many types of naturally occurring errors are not part of this system, for example, phonological and semantic errors. It is reasonable to hope that much of this dreck will be handled by similar subsystems. Of course, there will always remain errors that are outside of any system. But we expect that the apparent chaos is much more regular than it at first appears and that it can be modeled by the interaction of components that are themselves simple.

In the following discussion, I use the terms self-correction and editing more or less interchangeably, though the two terms emphasize the generation and interpretation aspects of the same process.

The self-correction rules specify how much, if anything, to expunge when an editing signal is detected. The rules depend crucially on being able to recognize an editing signal, for that marks the right edge of an expunction site. For the present discussion, I will assume little about the phonetic nature of the signal except that it is phonetically recognizable and that, whatever their phonetic nature, all editing signals are, for the self-correction system, equivalent. Specifying the nature of the editing signal is, obviously, an area where further research is needed.

The only action that the editing rules can perform is expunction, by which I mean removing an element from the view of the parser. The rules never replace one element with another or insert an element in the parser data structures. However, both replacements and insertions can be accomplished within the self-correction system by expunction of partially identical strings. For example, in

(6) I am--I was really annoyed.

the self-correction rules will expunge the I am which precedes the editing signal, thereby in effect replacing am with was and inserting really.

Self-corrected strings can be viewed formally as having extra material inserted, but not as involving either deletion or replacement of material. The linguistic system does seem to make use of both deletions and replacements in other subsystems of grammar, however, namely in ellipsis and rank shift. As with the editing system, these are not errors but formal systems that interact with the central features of the syntax. True errors do of course occur involving all three logical possibilities (insertion, deletion, and replacement), but these are relatively rare.

The self-correction rules have access to the internal data structures of the parser, and like the parser itself, they operate deterministically. The parser views the editing signal as occurring at the end of a constituent, because it marks the right edge of an expunged element. There are two types of editing rules in the system: expunction of copies, for which there are three rules, and lexically triggered restarts, for which there is one rule.

The copying rules say that if you have two elements which are the same and they are separated by an editing signal, the first should be expunged from the structure. Obviously the trick here is to determine what counts as copies. There are three specific places where copy editing applies.

SURFACE COPY EDITOR. This is essentially a non-syntactic rule that matches the surface string on either side of the editing signal and expunges the first copy. It applies to the surface string (i.e., for transcripts, the orthographic string) before any syntactic processing. For example, in (7), the underlined strings are expunged before parsing begins.

(7a) Well if they'd--if they'd had a knife I wou--I wouldn't be here today.
(7b) If they--if they could do it.

Typically, the Surface Copy Editor expunges a string of words that would later be analyzed as a constituent (or partial constituent) and would be expunged by the Category or the Stack Editors (as in 7a). However, the string that is expunged by the Surface Copy Editor need not be dominated by a single node; it can be a sequence of unrelated constituents. For example, in (7b) the parser will not analyze the first if they as an SBAR node, since there is no AUX node to trigger the start of a sentence, and therefore the words will not be expunged by either the Category or the Stack Editor. Such cases where the Surface Copy Editor must apply are rare, and it may therefore be that there exists an optimal parser grammar that would make the Surface Copy Editor redundant; all strings would be edited by the syntactically based Category and Stack Copy rules. However, it seems that the Surface Copy Editor must exist at some stage in the process of syntactic acquisition. The overlap between it and the other rules may be essential in learning.

CATEGORY COPY EDITOR. This copy editor matches syntactic constituents in the first two positions in the parser's buffer of complete constituents. When the first window position ends with an editing signal and the first and second constituents in the window are of the same type, the first is expunged. For example, in sentence (8) the first of two determiners separated by an editing signal is expunged, and the first of two verbs is similarly expunged.

(8) I was just that--the kind of guy that didn't have--like to have people worrying.

STACK COPY EDITOR. If the first constituent in the window is preceded by an editing signal, the Stack Copy Editor looks into the stack for a constituent of the same type, and expunges any copy it finds there along with all its descendants. (In the current implementation, the Stack Copy Editor is allowed to look at successive nodes in the stack, back to the first COMP node or attention-shifting boundary. If it finds a copy, it expunges that copy along with any nodes that are at a shallower level in the stack. If Fidditch were allowed to attach incomplete constituents, the Stack Copy Editor could be implemented to delete only the copy, without searching through the stack. The specifics of the implementation seem not to matter for this discussion of the editing rules.) In sentence (9), the initial embedded sentence is expunged by the Stack Copy Editor.

(9) I think that you get--it's more strict in Catholic schools.

It will be useful to look a little more closely at the operation of the parser to see the editing rules at work. Sentence (10) includes three editing signals, which trigger the copy editors. (Note also that the complement of were is ellipted.) I will show a trace of the parser at each of these correction stages.

(10) I--the--the guys that I'm--was telling you about were.

The first editor that comes into play is the Surface Copy Editor, which searches for identical strings on either side of an editing signal and expunges the first copy. This is done once for each sentence, before any lexical category assignments are made. Thus, in effect, the Surface Copy Editor corresponds to a phonetic/phonological matching operation, although it is in fact an orthographic procedure because we are dealing with transcriptions.
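As a rough illustration of the Surface Copy Editor just described, the following sketch assumes the editing signal appears in the transcription as a standalone `--` token; the tokenization, the function name, and the longest-match strategy are assumptions made for exposition, not details of the actual system.

```python
EDIT = "--"  # the transcribed editing signal (a double dash in the transcripts)

def surface_copy_edit(tokens):
    """Sketch of the Surface Copy Editor: wherever the word sequence just
    before an editing signal is repeated immediately after it, expunge the
    first copy.  The signal itself is left in place for the later,
    syntactically based editors."""
    out = list(tokens)
    i = 0
    while i < len(out):
        if out[i] == EDIT:
            best = 0
            # longest k such that the k words before the signal are repeated
            # (in the same order) right after it
            for k in range(1, min(i, len(out) - i - 1) + 1):
                before = [w.lower() for w in out[i - k:i]]
                after = [w.lower() for w in out[i + 1:i + 1 + k]]
                if EDIT not in before and before == after:
                    best = k
            if best:
                del out[i - best:i]   # expunge the first copy
                i -= best
        i += 1
    return out

# Example (7b): the first "if they" is expunged before parsing begins.
print(surface_copy_edit("if they -- if they could do it".split()))
# ['--', 'if', 'they', 'could', 'do', 'it']
```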
Obviously, a full understanding of the self-correction system calls for detailed phonetic/phonological investigations.

After the Surface Copy Editor has applied, the string that the lexical analyzer sees is (11) rather than (10).

(11) I--the guys that I'm--was telling you about were.

Lexical assignments are made, and the parser proceeds to build the tree structures. After some processing, the configuration of the data structures is that shown in Figure 1. Before determining what rule to apply next, the two editing rules come into play, the Category Editor and the Stack Editor. At this pulse, the Stack Editor will apply, because the first constituent in the window is of the same type (an AUX node) as the current active node, and the current node ends with an edit signal. As a result, the first window element is popped into another dimension, leaving the parser data structures in the state shown in Figure 2.

Parsing of the sentence proceeds, and eventually reaches the state shown in Figure 3, where the Stack Editor conditions are again met. The current active node and the first element in the window are both NPs, and the active node ends with an edit signal. This causes the current node to be expunged, leaving only a single NP node, the one in the window. The final analysis of the sentence, after some more processing, is the tree shown in Figure 4.

I should reemphasize that the status of the edited elements is special. The copy editing rules remove a constituent, no matter how large, from the view of the parser. The parser continues as if those words had not been said. Although the expunged constituents may be available for semantic interpretation, they do not form part of the main predication.

Figure 4. The final analysis of sentence (10).

A somewhat different sort of self-correction, less sensitive to syntactic structure and flagged not only by the editing signal but also by a lexical item, is the restart. A restart triggers the expunction of all words from the edit signal back to the beginning of the sentence. It is signaled by a standard edit signal followed by a specific lexical item drawn from a set including well, ok, see, you know, like I said, etc. For example,

(12a) That's the way if--well everybody was so stoned, anyway.
(12b) But when I was young I went in--oh I was nineteen years old.

It seems likely that, in addition to the lexical signals, specific intonational signals may also be involved in restarts.

The editing system I have described has been applied to a corpus of over twenty hours of transcribed speech, in the process of using the parser to search for various syntactic constructions. The transcripts are of sociolinguistic interviews of the sort developed by Labov and designed to elicit unreflecting speech that approximates natural conversation. They are conversational interviews covering a range of topics, and they typically include considerable non-fluency. (Over half the sentences in one 90-minute interview contained at least one non-fluency.)

The transcriptions are in standard orthography, with sentence boundaries indicated. The alternation of speakers' turns is indicated, but overlap is not. Editing signals, when noted by the transcriber, are indicated in the transcripts with a double dash. It is clear that this approach to transcription only imperfectly reflects the phonetics of editing signals; we can't be sure to what extent the editing signals in our transcripts represent facts about production and to what extent they represent facts about perception.
Nevertheless, except for a general tendency toward underrepresentation, there seems to be no systematic bias in our transcriptions of the editing signals, and therefore our findings are not likely to be undone by a better understanding of the phonetics of self-correction.

One major problem in analyzing the syntax of English is the multiple category membership of words. In general, most decisions about category membership can be made on the basis of local context. However, by its nature, self-correction disrupts the local context, and therefore the disambiguation of lexical categories becomes a more difficult problem. It is not clear whether the rules for category disambiguation extend across an editing signal or not. The results I present depend on a successful disambiguation of the syntactic categories, though the algorithm to accomplish this is not completely specified. Thus, to test the self-correction routines, I have, where necessary, imposed the proper category assignment. Table 1 shows the result of this editing system in the parsing of the interview transcripts from one speaker. All in all, this shows the editing system to be quite successful in resolving non-fluencies. The interviews for this study were conducted by Tony Kroch and by Anne Bower.
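To summarize the syntactically based rules described above, here is a schematic sketch of how the Category Copy Editor, the Stack Copy Editor, and the Restart rule might be checked ahead of the grammar rules at each step. The data types, attribute names, and the single-word restart triggers are simplifications invented for this illustration rather than details of Fidditch.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Constituent:
    category: str                   # e.g. "NP", "AUX", "DET", "VERB"
    ends_with_edit: bool = False    # an editing signal follows its last word
    preceded_by_edit: bool = False  # an editing signal precedes its first word
    first_word: str = ""            # used only for the Restart trigger check

RESTART_TRIGGERS = {"well", "ok", "see", "oh"}  # illustrative, not exhaustive

def apply_editors(stack: List[Constituent], window: List[Constituent]) -> bool:
    """Try the editing rules before any grammar rule.  Each rule can only
    expunge material; nothing is ever replaced or inserted."""

    # Category Copy Editor: two same-type constituents in the window, the
    # first of which ends with an editing signal -> expunge the first.
    if (len(window) >= 2 and window[0].ends_with_edit
            and window[0].category == window[1].category):
        del window[0]
        return True

    # Stack Copy Editor: the first window constituent is preceded by an
    # editing signal and matches an incomplete node on the stack -> expunge
    # that copy together with everything built above it.
    if window and window[0].preceded_by_edit:
        for i in range(len(stack) - 1, -1, -1):
            if stack[i].category == window[0].category:
                del stack[i:]
                return True

    # Restart: an editing signal followed by a lexical trigger -> expunge
    # back to the beginning of the sentence (here, abandon the whole stack).
    if window and window[0].preceded_by_edit \
            and window[0].first_word.lower() in RESTART_TRIGGERS:
        del stack[:]
        return True

    return False
```

In a fuller model the expunged constituents would be set aside for semantic interpretation rather than simply discarded, as the text notes for example (5).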
Although the editing rules for Fidditch are written as deterministic pattern-action rules of the same sort as the rules in the parsing grammar, their operation is in a sense isolable. The patterns of the self-correction rules are checked first, before any of the grammar rule patterns are checked, at each step in the parse.

Despite this independence in terms of rule ordering, the operation of the self-correction component is closely tied to the grammar of the parser, for it is the parsing grammar that specifies what sorts of constituents count as the same for copying. For example, if the grammar did not treat there as a noun phrase when it is the subject of a sentence, the self-correction rules could not properly resolve a sentence like (13), because the editing rules would never recognize that people and there are the same sort of element. (Note that (13) cannot be treated as a Restart because the lexical trigger is not present.)

(13) People--there's a lot of people from Kensington.

Thus, the observed pattern of self-correction introduces empirical constraints on the set of features that are available for syntactic rules. The self-correction rules impose constraints not only on what linguistic elements must count as the same, but also on what must count as different. For example, in sentence (14), could and be must be recognized as different sorts of elements in the grammar for the AUX node to be correctly resolved. If the grammar assigned the two words exactly the same part of speech, then the Category Copy Editor would necessarily apply, incorrectly expunging could.

(14) Kid could--be a brain in school.

It appears, therefore, that the pattern of self-corrections that occur represents a potentially rich source of evidence about the nature of syntactic categories.

Learnability. If the patterns of self-correction count as evidence about the nature of syntactic categories for the linguist, then this data must be equally available to the language learner. This would suggest that, far from being an impediment to language learning, non-fluencies may in fact facilitate language acquisition by highlighting equivalence classes. This raises the general question of how children can acquire a language in the face of unrestrained non-fluency. How can a language learner sort out the grammatical from the ungrammatical strings? (The non-fluencies of speech are of course but one aspect of the degeneracy of input that makes language acquisition a puzzle.) The self-correction system I have described suggests that many non-fluent strings can be resolved with little detailed linguistic knowledge.

As Table 1 shows, about a quarter of the editing signals result in expunction of only non-linguistic material. This requires only an ability to distinguish linguistic from non-linguistic stuff, and it introduces the idea that edit signals signal an expunction site.
Almost a third are resolved by the Surface Copying rule, which can be viewed simply as an instance of the general non-linguistic rule that multiple instances of the same thing count as a single instance. The category copying rules are generalizations of simple copying, applied to a knowledge of linguistic categories. Making the transition from surface copies to category copies is aided by the fact that there is considerable overlap in coverage, defining a path of expanding generalization. Thus, at the earliest stages of learning, only the simplest, non-linguistic self-correction rules would come into play, and gradually the more syntactically integrated rules would be acquired.

Contrast this self-correction system with an approach that handles non-fluencies by general problem-solving routines, for example Granger (1982), who proposes reasoning from what a speaker might be expected to say. Besides the obvious inefficiencies of general problem-solving approaches, it is worth giving special emphasis to the problem of learnability. A general problem-solving approach depends crucially on evaluating the likelihood of possible deviations from the norms. But a language learner has by definition only partial and possibly incorrect knowledge of the syntax, and is therefore unable to consistently identify deviations from the grammatical system. With the editing system I describe, the learner need not have the ability to recognize deviations from grammatical norms, but merely the non-linguistic ability to recognize copies of the same thing.

Generation. Thus far, I have considered the self-correction component from the standpoint of parsing. However, it is clear that its origins are in the process of generation. The mechanism for editing self-corrections that I have proposed has as its essential operation expunging one of two identical elements. It is unable to expunge a sequence of two elements. (The Surface Copy Editor might be viewed as a counterexample to this claim, but see below.) Consider expunction now from the standpoint of the generator. Suppose self-correction bears a one-to-one relationship to a possible action of the generator (initiated by some monitoring component) which could be called ABANDON CONSTRUCT X. And suppose that this action can be initiated at any time up until CONSTRUCT X is completed, when a signal is returned that the construction is complete. Further suppose that ABANDON CONSTRUCT X causes an editing signal. When the speaker decides in the middle of some linguistic element to abandon it and start again, an editing signal is produced.

If this is an appropriate model, then the elements which are self-corrected should be exactly those elements that exist at some stage in the generation process. Thus, we should be able to find evidence for the units involved in generation by looking at the data of self-correction. And indeed, such evidence should be available to the language learner as well.

I have described the nature of self-corrected speech (which is a major source of spoken non-fluencies) and how it can be resolved by simple editing rules within the context of a deterministic parser. Two features are essential to the self-correction system: 1) every self-correction site (whether it results in the expunction of words or not) is marked by a phonetically identifiable signal placed at the right edge of the potential expunction site; and 2) the expunged part is the left-hand member of a pair of copies, one on each side of the editing signal.
The copies may be of three types: 1) identical surface strings, which are edited by a matching rule that applies before syntactic analysis begins; 2) complete constituents, when two constituents of the same type appear in the parser's buffer; or 3) incomplete constituents, when the parser finds itself trying to complete a constituent of the same type as a constituent it has just completed. Whenever two such copies appear in such a configuration, and the first one ends with an editing signal, the first is expunged from further analysis. This editing system has been implemented as part of a deterministic parser, and tested on a wide range of sentences from transcribed speech. Further study of the self-correction system promises to provide insights into the units of production and the nature of linguistic categories.
null
null
null
null
{ "paperhash": [ "thompson|linguistic_analysis_of_natural_language_communication_with_computers", "hayes|flexible_parsing", "marcus|a_theory_of_syntactic_recognition_for_natural_language", "jefferson|error_correction_as_an_interactional_resource", "granger|understanding_:_design_and_implementation_of_'_tolerant_'_understanders" ], "title": [ "Linguistic Analysis of Natural Language Communication With Computers", "Flexible Parsing", "A theory of syntactic recognition for natural language", "Error correction as an interactional resource", "Understanding : Design and Implementation of ' Tolerant ' Understanders" ], "abstract": [ "Interaction with computers in natural \nlanguage requires a language that is flexible \nand suited to the task. This study of natural \ndialogue was undertaken to reveal those characteristics \nwhich can make computer English more \nnatural. Experiments were made in three modes \nof communication: face-to-face, terminal-to-terminal \nand human-to-computer, involving over \n80 subjects, over 80,000 words and over 50 \nhours. They showed some striking similarities, \nespecially in sentence length and proportion of \nwords in sentences. The three modes also share \nthe use of fragments, typical of dialogue. \nDetailed statistical analysis and comparisons \nare given. The nature and relative frequency of \nfragments, which have been classified into \ntwelve categories, is shown in all modes. Special \ncharacteristics of the face-to-face mode \nare due largely to these fragments (which \ninclude phatics employed to keep the channel of \ncommunication open). Special characteristics of \nthe computational mode include other fragments, \nnamely definitions, which are absent from other \nmodes. Inclusion of fragments in computational \ngrammar is considered a major factor in improving \ncomputer naturalness. \n \nThe majority of experiments involved a real \nlife task of loading Navy cargo ships. The \npeculiarities of face-to-face mode were similar \nin this task to results of earlier experiments \ninvolving another task. It was found that in \ntask oriented situations the syntax of interactions \nis influenced in all modes by this context \nin the direction of simplification, resulting in \nshort sentences (about 7 words long). Users \nseek to maximize efficiency In solving the problem. \nWhen given a chance, in the computational \nmode, to utilize special devices facilitating \nthe solution of the problem, they all resort to \nthem. \n \nAnalyses of the special characteristics of \nthe computational mode, including the analysis \nof the subjects\" errors, provide guidance for \nthe improvement of the habitability of such systems. \nThe availability of the REL System, a \nhigh performance natural language system, made \nthe experiments possible and meaningful. The \nindicated improvements in habitability are now \nbeing embodied in the POL (Problem Oriented \nLanguage) System, a successor to REL.", "When people use natural language in natural settings, they often use it ungrammatically, missing out or repeating words, breaking-off and restarting, speaking in fragments, etc., Their human listeners are usually able to cope with these deviations with little difficulty. If a computer system wishes to accept natural language input from its users on a routine basis, it must display a similar indifference. In this paper, we outline a set of parsing flexibilities that such a system should provide. We go on to describe FlexP. 
a bottom-up pattern-matching parser that we have designed and implemented to provide these flexibilities for restricted natural language input to a limited-domain computer system.", "Abstract : Assume that the syntax of natural language can be parsed by a left-to-right deterministic mechanism without facilities for parallelism or backup. It will be shown that this 'determinism' hypothesis, explored within the context of the grammar of English, leads to a simple mechanism, a grammar interpreter. (Author)", "ABSTRACT This paper considers some small errors which occur in natural talk, treating them as matters of competence, both in the production of coherent speech and the conduct of meaningful interaction. Focusing on a rule-governed occurrence of the interjection ‘uh’, a format is described by which one can display that one is correcting an error one almost, but did not, produce. It is argued that there are systematic ways in which someone who hears such talk can find that an error was almost made and what that error would have been. Two broad classes of error are considered, both of which can be announced by and extracted from the occurrence of an error correction format. These are ‘production’ errors; i.e. a range of troubles one encounters in the attempt to produce coherent, grammatically correct speech, and ‘interactional’ errors; i.e. mistakes one might make in the attempt to speak appropriately to some co-participant(s) and/or within some situation. Focusing on interactional errors, it is proposed that the error correction format (and other formats for events other than error) can be used to invoke alternatives to some current formulation of self and other(s), situation and relationship, and thereby serve as a resource for negotiating and perhaps reformulating a current set of identities. (Conversational analysis, discourse devices (metalinguistic, attitudinal markers), U.S. English.)", "Most large text-understanding systems have been designed under the assumption that the input text will be in reasonably \"neat\" form, e.g., newspaper stories and other edited texts. However, a great deal of natural language texts e.g.~ memos, rough drafts, conversation transcripts~ etc., have features that differ significantly from \"neat\" texts, posing special problems for readers, such as misspelled words, missing words, poor syntactic constructlon, missing periods, etc. Our solution to these problems is to make use of exoectations, based both on knowledge of surface English and on world knowledge of the situation being described. These syntactic and semantic expectations can be used to figure out unknown words from context, constrain the possible word-senses of words with multiple meanings (ambiguity), fill in missing words (elllpsis), and resolve referents (anaphora). This method of using expectations to aid the understanding of \"scruffy\" texts has been incorporated into a working computer program called NOMAD, which understands scruffy texts in the domain of Navy messages." ], "authors": [ { "name": [ "B. H. Thompson" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "P. Hayes", "G. Mouradian" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Mitchell P. Marcus" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "G. 
Jefferson" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "R. Granger" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null, null, null ], "s2_corpus_id": [ "1010309", "11007680", "6616065", "143718325", "11326430" ], "intents": [ [ "background" ], [], [ "methodology" ], [ "background" ], [ "background" ] ], "isInfluential": [ false, false, false, false, false ] }
Problem: The paper addresses the mystery of how people can understand non-fluent speech despite its deviation from grammatical norms, and how children can acquire language grammar from a mix of grammatical and ungrammatical strings. Solution: The paper proposes a system of rules for resolving non-fluencies in speech, focusing on self-corrections made by speakers to substitute intended words for wrongly produced words, aiming to transform non-fluent strings into fully grammatical sentences using a set of transformational rules.
500
0.396
null
null
null
null
null
null
null
null
fcbccfd580fad6d2e2e91947f278a5dff40d5df2
5989646
null
A Framework for Processing Partially Free Word Order
The partially free word order in German belongs to the class of phenomena in natural language that require a close interaction between syntax and pragmatics. Several competing principles, which are based on syntactic and on discourse information, determine the linear order of noun phrases. A solution to problems of this sort is a prerequisite for high-quality language generation. The linguistic framework of Generalized Phrase Structure Grammar offers tools for dealing with word order variation. Some slight modifications to the framework allow for an analysis of the German data that incorporates just the right degree of interaction between syntactic and pragmatic components and that can account for conflicting ordering statements. I. Introduction The relatively free order of major phrasal constituents in German belongs to the class of natural-language phenomena that require a closer interaction of syntax and pragmatics than is usually accounted for in formal linguistic frameworks. Computational linguists who pay attention to both syntax and pragmatics will find that analyses of such phenomena can provide valuable data for the design of systems that integrate these linguistic components. German represents a good test case because the role of pragmatics in governing word order is much greater than in English, while the role syntax plays is greater than in some of the so-called free-word-order languages like Warlpiri. The German data are well attested and thoroughly discussed in the descriptive literature. The fact that English and German are closely related makes it easier to assess these data and to draw parallels. The simple analysis presented here for dealing with free word order in German syntax is based on the linguistic framework of Generalized Phrase Structure Grammar (GPSG), especially on its Immediate Dominance/Linear Precedence formalism (ID/LP), and complements an earlier treatment of German word order. The framework is slightly modified to accommodate the relevant class of word order regularities.
{ "name": [ "Uszkoreit, Hans" ], "affiliation": [ null ] }
null
null
21st Annual Meeting of the Association for Computational Linguistics
1983-06-01
16
6
null
The relatively free order of major phrasal constituents in German belongs to the class of natural-language phenomena that require a closer interaction of syntax and pragmatics than is usually accounted for in formal linguistic frameworks. Computational linguists who pay attention to both syntax and pragmatics will find that analyses of such phenomena can provide valuable data for the design of systems that integrate these linguistic components. German represents a good test case because the role of pragmatics in governing word order is much greater than in English, while the role syntax plays is greater than in some of the so-called free-word-order languages like Warlpiri. The German data are well attested and thoroughly discussed in the descriptive literature. The fact that English and German are closely related makes it easier to assess these data and to draw parallels. The simple analysis presented here for dealing with free word order in German syntax is based on the linguistic framework of Generalized Phrase Structure Grammar (GPSG), especially on its Immediate Dominance/Linear Precedence formalism (ID/LP), and complements an earlier treatment of German word order. The framework is slightly modified to accommodate the relevant class of word order regularities. The syntactic framework presented in this paper is not bound to any particular theory of discourse processing; it enables syntax to interact with whatever formal model of pragmatics one might want to implement. A brief discussion of the framework's implications for computational implementation centers upon the problem of the status of metagrammatical devices.

German word order is essentially fixed; however, there is some freedom in the ordering of major phrasal categories like NPs and adverbial phrases, for example in the linear order of subject (SUBJ), direct object (DOBJ), and indirect object (IOBJ) with respect to one another. All six permutations of these three constituents are possible for sentences like (1a). Two are given as (1b) and (1c). (1a) Dann hatte der Doktor dem Mann die Pille gegeben. (Then had the doctor the man the pill given.) (1b) Dann hatte der Doktor die Pille dem Mann gegeben. (Then had the doctor the pill the man given.) (1c) Dann hatte die Pille der Doktor dem Mann gegeben. (Then had the pill the doctor the man given.) All permutations have the same truth-conditional meaning, which can be paraphrased in English as: Then the doctor gave the man the pill. There are several basic principles that influence the ordering of the three major NPs: • The unmarked order is SUBJ-IOBJ-DOBJ • Comment (or focus) follows non-comments • Personal pronouns precede other NPs • Light constituents precede heavy constituents. (This research was supported by National Science Foundation Grant IST-RI03$50. The views and conclusions expressed in this paper are those of the author and should not be interpreted as representative of the views of the National Science Foundation or the United States government. I have benefited from discussions with and comments from Barbara Grosz, Fernando Pereira, Jane Robinson, and Stuart Shieber. The best overview of the current GPSG framework can be found in Gazdar and Pullum (1982); for a description of the ID/LP format refer to Gazdar and Pullum (1981) and Klein (1983), and for the ID/LP treatment of German to Uszkoreit (1982a, 1982b) and Nerbonne (1982).) The order in (1a) is based on the unmarked order, (1b) would be appropriate in a discourse situation that makes the man the focus of the sentence, and (1c) is an acceptable sentence if both doctor and man are focussed upon. I use focus here in the sense of comment, the part of the sentence that contains new important information. (1c) could be uttered as an answer to someone who inquires about both the giver and recipient of the pill (for example, with the question: Who gave whom the pill?). The most complete description of the ordering principles, especially of the conflict between the unmarked order and the topic-comment relation, can be found in Lenerz (1977).

Syntactic as well as pragmatic information is needed to determine the right word order; the unmarked-order principle is obviously a syntactic statement, whereas the topic-comment order principle requires access to discourse information. Sometimes different ordering principles make contradictory predictions. Example (1b) violates the unmarked-order principle; (1a) is acceptable even if dem Mann [the man] is the focus of the sentence. The interaction of ordering variability and pragmatics can be found in many languages and not only in so-called free-word-order languages. Consider the following two English sentences: (2a) I will talk to him after lunch about the offer. (2b) I will talk to him about the offer after lunch. Most semantic frameworks would assign the same truth-conditional meaning to (2a) and (2b), but there are discourse situations in which one is more appropriate than the other. (2a) can answer a question about the topic of a planned afternoon meeting, but is much less likely to occur after an order to mention the offer as soon as possible. Formal linguistic theories have traditionally assumed the existence of rather independent components for syntax, semantics, and pragmatics. (Recent work such as that of Karttunen and Peters (1979), discourse representations (Kamp, 1980), and Situation Semantics (Barwise and Perry, 1981) narrows the gap between semantics and pragmatics.) Linguistics not only could afford this idealization but has probably temporarily benefited from it. However, if the idealization is carried over to the computational implementation of a framework, it can have adverse effects on the efficiency of the resulting system. If we assume that a language generation system should be able to generate all grammatical word orders and if we further assume that every generated order should be appropriate to the given discourse situation, then a truly nonintegrated system, i.e., a system whose semantic, syntactic, and pragmatic components apply in sequence, has to be inefficient. The syntax will first generate all possibilities, after which the pragmatic component will have to select the appropriate variant. To do so, this component will also need access to syntactic information. In an integrated model, much unnecessary work can be saved if the syntax refrains from using rules that introduce pragmatically inappropriate orders. A truly integrated model can discard improper parses very early during parsing, thereby considerably reducing the amount of syntactic processing. The question of integrating grammatical components is a linguistic problem. Any reasonable solution for an integration of syntax and pragmatics has to depend on linguistic findings about the interaction of syntactic and pragmatic phenomena.
An integrated implementation of any theory that does not account for this interaction will either augment the theory or neglect the linguistic facts. By supporting integrated implementations, the framework and analysis to be proposed below fulfill an important condition for efficient treatment of partially free word order.

The theory of GPSG is based on the assumption that natural languages can be generated by context-free phrase structure (CF-PS) grammars. As we know, such a grammar is bound to exhibit a high degree of redundancy and, consequently, is not the right formalism for encoding many of the linguistic generalizations a framework for natural language is expected to express. However, the presumption is that it is possible to give a condensed inductive definition of the CF-PS grammar, which contains various components for encoding the linguistic regularities and which can be interpreted as a metagrammar, i.e., a grammar for generating the actual CF-PS grammar. A GPSG can be defined as a two-level grammar containing a metagrammar and an object grammar. The object grammar combines (CF-PS) syntax and model-theoretic semantics. Its rules are ordered triples (n, r, t), where n is an integer (the rule number), r is a CF-PS rule, and t is the translation of the rule, its denotation represented in some version of intensional logic. The translation t is actually an operation that maps the translations of the children nodes into the translation of the parent. The nonterminals of r are complex symbols, subsets of a finite set of syntactic features or, as in the latest version of the theory (Gazdar and Pullum, 1982), feature trees of finite size. The rules of the object grammar are interpreted as tree-admissibility conditions. The metagrammar consists of four different kinds of rules that are used by three major components to generate the object grammar in a stepwise fashion. Figure (3) illustrates the basic structure of a GPSG metagrammar. (3) [Figure: Basic Rules (IDR doubles) feed Metarule Application, which produces further IDR doubles; Rule Extension then yields IDR triples; Linearization finally yields the Object Grammar (CF-PS rules). Metarules and Rule Extension Principles are additional inputs to these components.] First, there is a set of basic rules. Basic rules are immediate dominance rule (IDR) doubles, ordered pairs <n, i>, where n is the rule number and i is an IDR. IDRs closely resemble CF-PS rules, but, whereas the CF-PS rule γ → δ1 δ2 ... δn contains information about both immediate dominance and linear precedence in the subtree to be accepted, the corresponding IDR γ → δ1, δ2, ..., δn encodes only information about immediate dominance. The order of the right-hand-side symbols, which are separated in IDRs by commas, has no significance. Metarule Application maps IDR doubles to other IDR doubles. For this purpose, metarules, which are the second kind of rules, are applied to basic rules and then to the output of metarule applications to generate more IDR doubles. Metarules are relations between sets of IDRs and are written as A ⇒ B, where A and B are rule templates. The metarule can be read as: if there is an IDR double of kind A, then there is also an IDR double of kind B. In each case the rule number is copied from A to B. (Rule number might be a misleading term for n because this copying assigns the same integer to the whole class of rules that were derived from the same basic rules. This rule number propagation is a prerequisite for the GPSG account of subcategorization.) Several metarules can apply in the derivation of a single IDR double; however, the principle of Finite Closure, defined by Thompson (1982), allows every metarule to apply only once in the derivational history of each IDR double. The invocation of this principle avoids the derivation of infinite rule sets, including those that generate non-CF, non-CS, and nonrecursive languages. Another component maps IDR doubles to IDR triples, which are ordered triples (n, i, t) of a rule number n, an IDR i, and a translation t. The symbols of the resulting IDRs are fully instantiated feature sets (or structures) and therefore identical to object grammar symbols. Thus, this component adds semantic translations and instantiates syntactic features. It is the separation of linear precedence from immediate dominance statements in the metagrammar that is referred to as ID/LP format. And it is precisely this aspect of the formalism that makes the theory attractive for application to languages with a high degree of word-order freedom. The analysis presented in the next section demonstrates the functioning of the formalism and some of its virtues.

Uszkoreit (1982a) proposes a GPSG analysis of German word order that accounts for the fixed-order phenomena, including the notoriously difficult problem of the position of finite and nonfinite verbs. Within the scope of this paper it is impossible to repeat the whole set of suggested rules. A tiny fragment should suffice to demonstrate the basic ideas as well as the need for modifications of the framework.
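The machinery just described (ID rules whose right-hand-side order is not significant, metarules that map rule doubles to rule doubles while copying the rule number, and the Finite Closure restriction) can be made concrete with a small executable sketch. The encoding below is illustrative only and anticipates rules (4)-(6) presented next; the Python names IDRule, NP, and flat_sentence_metarule, and the toy feature sets, are assumptions made for the example, not part of any GPSG implementation cited here.

# Illustrative sketch (assumed representation): an ID rule is a rule number,
# a left-hand-side feature set, and a collection of right-hand-side symbols
# whose order carries no information.
from dataclasses import dataclass

@dataclass(frozen=True)
class IDRule:
    number: int        # rule number n; metarules copy it to their output
    lhs: frozenset     # complex symbol = set of syntactic features
    rhs: tuple         # right-hand-side symbols; order is NOT significant

VP = frozenset({"+V", "-N", "+2BAR"})
S = frozenset({"S"})
V = frozenset({"+V", "-N", "+0BAR"})
def NP(case):
    return frozenset({"+N", "-V", "+2BAR", case})

# Basic rule (4): VP -> NP[+DAT], NP[+ACC], V (rule number 5)
rule4 = IDRule(5, VP, (NP("+DAT"), NP("+ACC"), V))

def flat_sentence_metarule(rule):
    # Metarule (5): VP -> X, V  ==>  S -> NP[+NOM], X, V
    # (the [-AUX] condition on the verb is omitted in this toy version)
    if rule.lhs == VP and V in rule.rhs:
        rest = tuple(sym for sym in rule.rhs if sym != V)
        return IDRule(rule.number, S, (NP("+NOM"),) + rest + (V,))
    return None

# Applying the metarule to (4) yields rule (6): S -> NP[+NOM], NP[+DAT],
# NP[+ACC], V, still carrying rule number 5. Finite Closure means this
# metarule may not be applied again in the derivational history of rule6.
rule6 = flat_sentence_metarule(rule4)
print(rule6)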
Two LP rules are needed for the main verb:(Sa) +MC < NP (8b) NP <-MCThe regularities that govern the order of the noun phrases can also be encoded in LP rules, as in (ga)-!ge):(Oa) +NOMINATIVE < +DATIVE (9b) +NOMINATIVE < +ACCUSATIVE (9c) +DATIVE < +ACCUSATIVE (9d) -FOCUS < +FOCUS (9e) +PRONOUN < -PRONOUN(Kart.tunen and Peters, 1979) 8 or a function from discourse situations to the appropriate truth-conditional meaning in the spirit of Barwise and Perry (1981) . The analysis here is not concerned with choosing a formalism for an extended semantic component, but rather with demonstrating where the syntax has to provide for those elements of discourse information that influence the syntactic structure directly.Note, that the new LP rules do not resolve the problem of ordering-principle conflicts, for the violation of one LP rule is enough to rule out an ordering. On the other hand, the absence of these LP rules would incorrectly predict that all permutations are acceptable. The next section introduces a redefinition of LP rules that provides a remedy for this deficiency.Before introducing a new definition of LP rules, let me suggest, anot.her modification that will simplify things somewhat. The I,P rules considered so far are not really LP rules in the sense in which they were defined by their originators. After all. LP rules are defined as members of a partial ordering on "v~,¢ U VT'. Our rules are schemata for LP rules at best, abbreviating the huge set of UP rules that are instantiations of these schemata. This definition is an unfortunate one in several respects. It not. only creates an unnecessarily large set of rules IVN contains thousands of fully instantiated complex symbols) but also suppresses some of the important generalizations about the language. Clearly, one could extract the relevant generalizations even from a fully expanded LP relation, e.g., realize that there is no LP rule whose first element has -MC and its second element NP. However, it should not be necessary to extract generalizations from the grammar; the grammar should express these generalizat.ions directly. Another disadvantage follows from the choice of a procedure for arriving at the fully expanded LP rela-Lion. Should all extensions that are compatible instantiations of (Sa), (Sb). and (9a)-(9e} be LP rules: If so. then (10) is an instantiat.ion of (8a):(I0) +MC' NP +DEF < +FIN,.\ feature FOCUS has been added that designates a focused consf it,eat. Despite its name FOCUS is a syntactic'fcature, justified by syntactic Pacts, such as its influence on word order. This syntactic feature needs t,o be linked with the appropriate discourse information. The place to do this is in the rule exteusioq component, where features are instantiated and semantic translations added to ID rules. It is assumed that in so doing the translation part of rules will have to be extended anyway so as to incorporate non-truth-conditional aspects of the meaning. For example, the full translation could be an ordered pair of truth-conditional and non-truth-conditional content, extending Karttunen and Peters's treatment of conventional implicature Yet nothing can be a matrix verb and definite simultaneously, and NPs cannot be finite. (101 is a vacuous rule. Whether il is a LP rule at all will depend on the way the nonterminal vocabulary of the object grammar is defined. 
If it only includes the nonterminals that actually occur in rules then (10) is not as LP rule.[n this case we would need a component of the metagrammar, the feature instantiation principles, to determine 8T,~ be more precise. Karttunen and Peters actuaJly make their translati,,ns ordered triples of truth-conditiona.l content, impllcatures, and an inhcrhance expression that plays a role in h~.ndling the projection problem for presuppositions.another compouent of the metagrammar, the LP component. 9 LP will be redefined as a partial order on 2 p, where F is the set of syntactic features I0The second and more important change can best be described by viewing the LP component as a function from a pair of symbols (which can be characterized as feature sets) to truth values, telling us for every pair of symbols whether the first can precede the second in a linearized ru!e. Given the LP relation {(al,~/t),(a~,B~.) ..... (a~,~)} and a pair of complex symbols (3',6), the function can be expressed as in (11). A c,~ A ... A c,~ where c~ ----~(~; _C 6 A #; C: 3') for 1 < i < n ~,Ve call the conjunct clauses LP conditions; the whole conjunction is a complex LP condition. The complex LP condition allows "T to precede /~ on the right-hand side of a CF-PS rule if every LP condition is true. An LP condition ct derived from the LP rule (a~,//i) is true if it is not the case that 3 has the features ;/~ and 6 has the features a¢. Thus the LP rule NP < VP stanch for the following member of the LP relation {{+N,-V, +2B~R}, l-N, +V, +2BAR}). The LP condition following from this rule prevents a superset of {-N, +V, +2BAR} from preceding a superset of l-N, +V, +2BAR}, i.e., a VP from preceding an NP.But notice that there is nothing to prevent us from writing a fictitious LP rule such as (12} +PRONOUN < -ACCUSATIVEGerman has verbs like Ichrcn that take two accusative noun phr~.ses as complements. If {12) were an LP rule then the resulting LP condition defined as in ( l 1 ) would rule out any occurrence of two prouominalized sister NPs because either order would be rejected.l 1It. is an empirical question if one might ever find it useful to write LP rules as in (12}, i.e., rules a < ~/, where a U 3 could be a ~ubset of a complex symbol. Let me introduce a minor redefinition of the interpretation of LP, which will take care of cases such as (12) and at the same prepare the way for a more substantial modification of LP rules. LP shall again be interpreted as a function from pairs of feature sets (associated with complex symbols} to truth values. Given the LP relation {(a1,,'Jl),(oo..;]'.,} ..... (a.,~q~) and a pair of complex symbols 0The widety uscd notation for nomnstantiated LP rules and the feature instantiati,,n principles could be regarded an meta, met.Lgrammatical devices that inductively define a part of"the metagrammar. 10Remember that, in an .~-synta.x. syntactic categories abbreviate feature sets NP ~ {+N, -V, +2BAR}. The definition can emily be extended to work on feature trees instead of feature sets. 1 lln principle, there is nothing in the original ID/LP definition either that would prevent the grammar writer from abbreviating a set of LP rules by (121. It is not quite clear, however, which set of LP rules is abbreviated by (r").(3',/~), the function can be expressed as in (13).(13) ct A c2, A ... A cn where ~, -(a~c6 A B~C3,)-(o~C3, A B, C6)for l < i < nThat means 3' can precede 6 if all LP conditions are true. 
For instance, the LP condition of LP rule (12) will yield false only if "t is +ACCUSATIVE and # is +PRONOUN, and either 3, is -PRONOUN or 6 is -ACCUSATIVE (or both).-Now let. us assume that, in addition to the kind of simple LP rules just introduced, we can also have complex LP rules consisting of several simple LP rules and notated in curled brackets a.s in (14}:{14) '+NOMINATIVE < +DATIVE ] +NOMINATIVE < +ACCUSATIVE| +DATIVE < +ACCUSATIVE~ -FOCUS < +FOCUS | +PRONOUN < -PRONOUN /The LP condition associated with such a complex LP rule shall be the disjunction of the LP conditions assigned to its members. LP rules can be generally defined as sets of ordered pairs of feature sets {(at,Bt),(a~,~) ..... (am,~/m)}, which are either notated with curled brackets as in (10), or, in the case of singletons, as LP rules of the familiar kind. A complex LP rule {{at, dl), (no_, ,%) ..... {am, B,n)} is interpreted as a LP condition of the following form {(o 1 C 6 A~t C -~)V(a~ C 6 At/= C_ -,)v . vt~.,C6A~,,C_~))--((a, C_3,A3, c_ ~}v(a.. c_ "l A ,'t= C 6)V ... V(am C 3, A dm ~ 6)}. Any of the atomic LP rules within the complex LP rule can be violated as long as the violations are sanctioned by at least one of the atomic LP rules.Notice that with respect to this definition, "regular" LP rules, i.e., sing{elons, can be regarded as a speciaJ case of complex I,P rules.[ want ¢o suggest that the LP rules in {Sa}, (8h), and (l-I} arc a subset of the LP rules of German. This analysis makes a number of empirical predictions. For example, it predicts that (15) and 16 In (17) the sub-LP-rules +DAT < +ACC and -FOCUS < +FOCUS are violated. No other sub-LP-rule legitimizes these violations and therefore the sentence is bad.This agrees with the findings of Lenerz (1977) , who tested a large number of sample sentences in order to determine the interaction of the unmarked syntactic order and the ordering preferences introduced by discourse roles. There are too many possible feature iustantiatious and permutations of the three noun phrases to permit making grammaticality predictions here for a larger sample of ordering variants. So far 1 have not discovered any empirical deficiencies in the proposed analysis.
The theory of GPSG, as described by its creators and as outlined in this paper, cannot be used directly for implementation. The number of rules generated by the metagrammar is just too large. The Hewlett-Packard system (Gawron et al., 1982) as well as Henry Thompson's program, which are both based on a pre-ID/LP version of GPSG, use metarules as metagrammatical devices, but with feature instantiation built into the processor. Agreement checks, however, which correspond to the work of the metagrammatical feature instantiation principles, are done at parse time. As Berwick and Weinberg (1982) have pointed out, the context-freeness of a grammar might not accomplish much when the number of rules explodes. The more components of the metagrammar that can be built into the processor (or used by it as additional rule sets at parse time), the smaller the resulting grammar will be. The task is to search for parsing algorithms that incorporate the work of the metagrammar into context-free phrase structure parsing without completely losing the parsing-time advantages of the latter. Most PSG parsers do feature handling at parse time. Recently, Shieber (forthcoming) has extended the Earley algorithm (Earley, 1970) to incorporate the linearization process without a concomitant loss in parsing efficiency. The redefinition of the LP component proposed in this paper can be incorporated easily and efficiently into Shieber's extension. If the parser uses the disjunctive LP rules to accept all ordering variants that are well-formed with respect to a discourse, there still remains the question of how the generator chooses among the disjuncts in the LP rule. It would be very surprising if the different orderings that can be obtained by choosing one LP rule disjunct over another did in fact occur with equal frequency. Although there are no clear results that might provide an answer to this question, there are indications that certain disjuncts "win out" more often than others. However, this choice is purely stylistic. A system that is supposed to produce high-quality output might contain a stylistic selection mechanism that avoids repetitions or chooses among variants according to the type of text or dialogue.
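The following fragment is not Shieber's extension of the Earley algorithm; it is only a minimal sketch, reusing the lp_ok predicate from the sketch above, of how linearization can be deferred: rather than expanding an ID rule into all of its ordered PS variants, a parser or generator can enumerate just those orderings of the right-hand side that satisfy the LP condition.

# Toy sketch (not Shieber's algorithm): enumerate only the linearizations of
# an ID rule's right-hand side that the LP component sanctions, instead of
# listing every permutation as a separate object-grammar rule.
def linearizations(rhs, lp_ok):
    def extend(prefix, remaining):
        if not remaining:
            yield list(prefix)
            return
        for i, sym in enumerate(remaining):
            # sym may be placed next only if it may follow everything
            # already placed before it.
            if all(lp_ok(prev, sym) for prev in prefix):
                yield from extend(prefix + [sym], remaining[:i] + remaining[i + 1:])
    yield from extend([], list(rhs))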
null
null
The proposed analysis of partially free word order in German makes accurate predictions about the grammaticality of ordering variants, including their appropriateness with respect to a given discourse. The ID/LP format, which has the mechanisms to handle free word order, has been extended to account for the interaction of syntax and pragmatics, as well as for the mutually competing ordering principles. The modifications are compatible with efficient implementation models. The redefined LP component can be used for the implementation of stylistic choice.
Main paper: the problem: German word order is essentially fixed: however, there is some freedom in the ordering of major phrasal categories like NPs and adverbial phrases -for example, in the linear order of subject (SUB J), direct object (DOBJ), and indirect object (lOB J) with respect to one another. All six permutations of these three constituents are possible for sentences like (In). Two are given as {Ib) and (It).(la) Dann hatte der Doktor dem Mann die Pille gegeben.Then had the doctor the man the pill given (lb) Dann hatte dec Doktor die Pille dem Mann gegeben. Then had the doctor the pill the man given (It) Dann hatte die Pille der Doktor dem Mann gegeben. Then had the pill the doctor the man given All permutations have the same truth conditional meaning, which can be paraphrased in English as: Then the doctor gave the man the pill.There are several basic principles that influence the ordering of the three major NPs:• The unmarked order is SUBJ-iOBJ-DOBJ• Comment (or focus) follows non-comments * Personal pronouns precede other NPs• Light constituents precede heavy constituents, *This rese.'trch was supported by the National Science Foundation Grant [ST-RI03$50, The views and conclusions expressed in this paper are those ,,r the :tutbor and should not be interpreted as representative of the views of the Nati.,nal Science Foundation or the United States government. I have benefited fr,~rn discussions with and comments from Barbara Grosz, Fernand,, Pcreira. Jane Robinson. and Stuart Shieber. tThe best overview of the current GPSG framework can be found in Gazdar and Pullum (1982) . For :t description of the II)/LP format refer to Gazdar and Pullum (Ig8l} and Klein (1983) , for the ID/LP treatment of German t,, tszkoreit (]g82a. lgB2b} and Nerbonne (Ig82).The order in (la) is based on the unmarked order, (lb) would be appropriate in a discourse situation that makes the man the focus of the sentence, and (1c) is an acceptable sentence if both doctor and man are focussed upon. l use focus here in the sense of comment, the part of the sentence that contains new important information. (lc) could be uttered as an answer to someone who inquires about both the giver and recipient of the pill (for example, with the question: Who gave whom the pill?l. The most complete description of the ordering principles, especially of the conflict between the unmarked order and the topic-commeni, relation, can be found in Lenerz (1977) . implications for processing models: Syntactic as well as pragmatic information is needed to determine the right word order; the unmarked-order principle is obviously a syntactic statement, whereas the topiccomment order principle requires access to discourse information. °, Sometimes different ordering principles make contradictory predictions. Example (lb) violates the unmarked-order principle; (In) is acceptable even if dem Mann [the man] is the focus of the sentence~ 3The interaction of ordering variability and pragmatics can be found in many languages and not only in so-called free-wordorder languages. Consider the following two English sentences: (2a) I will talk to him after lunch about the offer. (2b) I will talk to him about the offer after lunch.Most semantic frameworks would assign the same truthconditional meaning to (2a) and (2b), but there are discourse situations in which one is more appropriate than the other. 
(2a) can answer a que~-tion about the topic of a planned afternoon meeting, but is much less likely to occur after an order to mention the offer as soon as possible. 4Formal linguistic theories have traditionally assumed the existence of rather independent components for syntax, semantics, and pragmatics, s Linguistics not only could afford this idealization but has probably temporarily benefited from it. However, if the idealization is carried over to the computational implementation of a framework, it can have adverse effects on the efficiency of the resulting system. Peters. 1979) , discourse representations (Kamp, If80) and Situati~,n Semantics ( Barwise and Perry. 1981) narrows the gap between .,,.'mantics and pragmatics.If we as.~ume that a language generation system should be able to generate all grammatical word orders and if we further assume that, every generated order should be appropriate to the given discourse situation, then a truly nonintegrated system, i.e., a system whose semantic, syntactic, and pragmatic components apply in sequence, has to be inel~cient. The syntax will first generate all possibilities, after which the pragmatic component will have to select the appropriate variant. To do so, this component will also need access to syntactic information.In an integrated model, much unnecessary work can be saved if the syntax refrains from using rules that introduce pragmatically inappropriate orders. A truly integrated model can discard improper parses very early during parsing, thereby considerably reducing the amount of syntactic processing.The question of integrating grammatical components is a linguistic problem. Any reasonable solution for an integration of syntax and pragmatics has to depend on linguistic findings about the interaction of syntactic and pragmatic phenomena. An integrated implementation of any theory that does not account for this interaction will either augment the theory or neglect the linguistic facts.By supporting integrated implementations, the framework and analysis to be proposed below fulfill an important condition for effcient treatment of partially free word order. tile framework of cpsg in id/lp format: The theory of GPSG is based on the assumption that nat ural languages can be generated by context-free phrase structure (CF-PS) grammars. As we know, such a grammar is bound to exhibit a high degree of redundancy and, consequently, is not the right formalism for encoding many of the linguistic generalizations a framework for natural language is expected to express. However. the presumption is that it is possible to give a condensed inductive definition of the CF-PS grammar, which contains various components for encoding the linguistic regt,laritics and which can be interpreted as a metagrammar, i.e.. a grammar for generating the actual CF-PS grammar.A GPSG can be defined as a two-leveJ grammar containing a metagrammar and an object grammar. The object grammar combines {CF-PS} syntax and model-theoretic semantics. Its rules are ordered triples (n. r. t) where n is an integer (the rule number}, r is a CF-PS rule. and t is the tramlationoft.he rule, its denotation represented in some version of intensional logic. The translation t is actually an operation that maps the translation of the children nodes into the translation of t.he parent. The nonterminals of r are complex symbols, subsets of a finite set of syntactic features or -as in the latest version of the theory (Gazd:w and Pullum, 1982) -feature trees of finite size. 
The rules o/' the obJect grammar are interpreted as tree-admissability conditions.The metagrammar consists of four different kinds of rules that are used by three major components to generate the object grammar in a stepwise fashion. Figure {3 ) illustrates the basic structure of a GPSG metagrammar.(3){Basic Rules ~N~ IDR doubles)j/ Application~ [ Metarule (IDR doubles) Rule Extension I i IDR triples) I binearization .' l ~{bjeet-G rammar~'X~ F-PS Rules),~/ Metaxules )~Rule Ext. Princpls).First. there is a set of banjo rules. Basic rules are immediate domi.a.ce rule (IDR) double~, ordered pairs < n,i >, where n is the rule number and i is an [DR.1DRs closely resemble CF-PS rules, but, whereas the CF-PS rule "1 --6t 6..... 6. contains information about both immediate dominance and linear precedence in the subtree to be accepted, the corresponding IDR "~ --6t, /f~. ..... /f. encodes only information about immediate dominance. The order of the right-hand-side symbols, which are separated in IDRs by commas, has no significance.Metarule Application, maps [DR doubles to other IDR doubles. For this purpose, metaxules, which are the second kind of rules are applied to basic rules and then to the output of metarule applications to generate more IDR doubles. Metarules are relations between sets of IDRs and are written as A = B, where A and B are rule templates. The metarute can be read as: If there is an IDR double of kind A, then there is also an IDR double of kind /3. In each case the rule number is copied from A to /3. s .Several metarules can apply in the derivation of a single II)R double; however, the principle of Finite Closure, defined by Thompson (1982}, allows every metarule to apply only once in the derivational history of each IDR double. The invocation of this principle avoids the derivation of infinite rule sets, in-6Rule number might he a misleading term for n because this copying :~.ssigns the s~me integer to the whole class of rules that were derived from the ~ame basic rules. This rule number propagation is a prerequisite for the <iPSG accouht of subcategori2ation.eluding those that generate non-CF, non-CS, and noarecursive languagesJ 7Another component maps IDR doubles to IDR triples, which are ordered triples (n,i,t) of a rule number., an IDR i, and a translation t. The symbols of the resulting IDRs axe fully instantiated feature sets (or structures} and therefore identical to object grammar symbols. Thus, this component adds semantic translations and instantiates syntactic features. It is the separation of linear precedence from immediate dominance statements in the metagrammar that is referred to .as ID/LP format. And it is precisely this aspect of the formalism that. makes the theory attractive for application to languages with a high degree of word-urder freedom. The analysis presented in the next section demonstrates the functioning of the formalism and some of its virtues. Uszkoreit (1982a) proposes a GPSG analysis of German word order that accounts for the fixed-order phenomena, including the notoriously difqcult problem of the position of finite and nonfinite verbs. Within the scope of this paper it is impossible to repeat, the whole set of suggested rules. A tiny fragment should sumce to demonstrate the basic ideas as well as the need for modifications of the framework.Rule (41 is the basic VP ID rule that combines ditransitive verbs like forms of gebe. 
(give) with its two objects:(4} (,5, VP --.NP, NP, V) [+DATI[+ACC]Th,~ rule .~tates that a VP can expand as a dative NP (IOBJ}, an attn.-alive NP (DOBJ), and a verb. Verbs that can occur in dilrnnsitive VPs, like geben (give). are marked in the lexicon with the rule number 5. Nothing has been said about the linear order of these constituents. The following metarule supplies a "flat" sentence rule for each main verb VP rule [+NOM 1 stands for the nominative case, which marks the subject. (5)VP ~ X, V ~ S -.* NP, X, V [-AUX] [+NOM]It generates the rule under (6) from (4):(6) (5, S ---, NP, NP, NP, V) [+ NOMI[+DAT][+ACC]Example 7gives a German constituent that will be admitted by a PS rule derived from ID rule (6):(7} der Doktor dem Mann die Pille gegeben the doctor the man the pill given I shall not list the rules here that combine the auxiliary halle and the temporal adverb dann with (7) to arrive at sentence (la), since these rules play no role in the ordering of the three noun phrases. What is of interest here is the mapping from ID rule (5) to t.he appropriate set of PS rules. Which LP rules are needed to allow for all and only the acceptable linearizations?The position of the verb is a relatively easy matter: if it is the finite matrix verb it precedes the noun phrases; in all other cases, it follows everything else. We have a feature MC for matrix clause as well as a feature co-occurrence restriction to ensure that +MC will always imply +FIN (finite). Two LP rules are needed for the main verb:(Sa) +MC < NP (8b) NP <-MCThe regularities that govern the order of the noun phrases can also be encoded in LP rules, as in (ga)-!ge):(Oa) +NOMINATIVE < +DATIVE (9b) +NOMINATIVE < +ACCUSATIVE (9c) +DATIVE < +ACCUSATIVE (9d) -FOCUS < +FOCUS (9e) +PRONOUN < -PRONOUN(Kart.tunen and Peters, 1979) 8 or a function from discourse situations to the appropriate truth-conditional meaning in the spirit of Barwise and Perry (1981) . The analysis here is not concerned with choosing a formalism for an extended semantic component, but rather with demonstrating where the syntax has to provide for those elements of discourse information that influence the syntactic structure directly.Note, that the new LP rules do not resolve the problem of ordering-principle conflicts, for the violation of one LP rule is enough to rule out an ordering. On the other hand, the absence of these LP rules would incorrectly predict that all permutations are acceptable. The next section introduces a redefinition of LP rules that provides a remedy for this deficiency.Before introducing a new definition of LP rules, let me suggest, anot.her modification that will simplify things somewhat. The I,P rules considered so far are not really LP rules in the sense in which they were defined by their originators. After all. LP rules are defined as members of a partial ordering on "v~,¢ U VT'. Our rules are schemata for LP rules at best, abbreviating the huge set of UP rules that are instantiations of these schemata. This definition is an unfortunate one in several respects. It not. only creates an unnecessarily large set of rules IVN contains thousands of fully instantiated complex symbols) but also suppresses some of the important generalizations about the language. Clearly, one could extract the relevant generalizations even from a fully expanded LP relation, e.g., realize that there is no LP rule whose first element has -MC and its second element NP. 
However, it should not be necessary to extract generalizations from the grammar; the grammar should express these generalizat.ions directly. Another disadvantage follows from the choice of a procedure for arriving at the fully expanded LP rela-Lion. Should all extensions that are compatible instantiations of (Sa), (Sb). and (9a)-(9e} be LP rules: If so. then (10) is an instantiat.ion of (8a):(I0) +MC' NP +DEF < +FIN,.\ feature FOCUS has been added that designates a focused consf it,eat. Despite its name FOCUS is a syntactic'fcature, justified by syntactic Pacts, such as its influence on word order. This syntactic feature needs t,o be linked with the appropriate discourse information. The place to do this is in the rule exteusioq component, where features are instantiated and semantic translations added to ID rules. It is assumed that in so doing the translation part of rules will have to be extended anyway so as to incorporate non-truth-conditional aspects of the meaning. For example, the full translation could be an ordered pair of truth-conditional and non-truth-conditional content, extending Karttunen and Peters's treatment of conventional implicature Yet nothing can be a matrix verb and definite simultaneously, and NPs cannot be finite. (101 is a vacuous rule. Whether il is a LP rule at all will depend on the way the nonterminal vocabulary of the object grammar is defined. If it only includes the nonterminals that actually occur in rules then (10) is not as LP rule.[n this case we would need a component of the metagrammar, the feature instantiation principles, to determine 8T,~ be more precise. Karttunen and Peters actuaJly make their translati,,ns ordered triples of truth-conditiona.l content, impllcatures, and an inhcrhance expression that plays a role in h~.ndling the projection problem for presuppositions.another compouent of the metagrammar, the LP component. 9 LP will be redefined as a partial order on 2 p, where F is the set of syntactic features I0The second and more important change can best be described by viewing the LP component as a function from a pair of symbols (which can be characterized as feature sets) to truth values, telling us for every pair of symbols whether the first can precede the second in a linearized ru!e. Given the LP relation {(al,~/t),(a~,B~.) ..... (a~,~)} and a pair of complex symbols (3',6), the function can be expressed as in (11). A c,~ A ... A c,~ where c~ ----~(~; _C 6 A #; C: 3') for 1 < i < n ~,Ve call the conjunct clauses LP conditions; the whole conjunction is a complex LP condition. The complex LP condition allows "T to precede /~ on the right-hand side of a CF-PS rule if every LP condition is true. An LP condition ct derived from the LP rule (a~,//i) is true if it is not the case that 3 has the features ;/~ and 6 has the features a¢. Thus the LP rule NP < VP stanch for the following member of the LP relation {{+N,-V, +2B~R}, l-N, +V, +2BAR}). The LP condition following from this rule prevents a superset of {-N, +V, +2BAR} from preceding a superset of l-N, +V, +2BAR}, i.e., a VP from preceding an NP.But notice that there is nothing to prevent us from writing a fictitious LP rule such as (12} +PRONOUN < -ACCUSATIVEGerman has verbs like Ichrcn that take two accusative noun phr~.ses as complements. If {12) were an LP rule then the resulting LP condition defined as in ( l 1 ) would rule out any occurrence of two prouominalized sister NPs because either order would be rejected.l 1It. 
is an empirical question if one might ever find it useful to write LP rules as in (12}, i.e., rules a < ~/, where a U 3 could be a ~ubset of a complex symbol. Let me introduce a minor redefinition of the interpretation of LP, which will take care of cases such as (12) and at the same prepare the way for a more substantial modification of LP rules. LP shall again be interpreted as a function from pairs of feature sets (associated with complex symbols} to truth values. Given the LP relation {(a1,,'Jl),(oo..;]'.,} ..... (a.,~q~) and a pair of complex symbols 0The widety uscd notation for nomnstantiated LP rules and the feature instantiati,,n principles could be regarded an meta, met.Lgrammatical devices that inductively define a part of"the metagrammar. 10Remember that, in an .~-synta.x. syntactic categories abbreviate feature sets NP ~ {+N, -V, +2BAR}. The definition can emily be extended to work on feature trees instead of feature sets. 1 lln principle, there is nothing in the original ID/LP definition either that would prevent the grammar writer from abbreviating a set of LP rules by (121. It is not quite clear, however, which set of LP rules is abbreviated by (r").(3',/~), the function can be expressed as in (13).(13) ct A c2, A ... A cn where ~, -(a~c6 A B~C3,)-(o~C3, A B, C6)for l < i < nThat means 3' can precede 6 if all LP conditions are true. For instance, the LP condition of LP rule (12) will yield false only if "t is +ACCUSATIVE and # is +PRONOUN, and either 3, is -PRONOUN or 6 is -ACCUSATIVE (or both).-Now let. us assume that, in addition to the kind of simple LP rules just introduced, we can also have complex LP rules consisting of several simple LP rules and notated in curled brackets a.s in (14}:{14) '+NOMINATIVE < +DATIVE ] +NOMINATIVE < +ACCUSATIVE| +DATIVE < +ACCUSATIVE~ -FOCUS < +FOCUS | +PRONOUN < -PRONOUN /The LP condition associated with such a complex LP rule shall be the disjunction of the LP conditions assigned to its members. LP rules can be generally defined as sets of ordered pairs of feature sets {(at,Bt),(a~,~) ..... (am,~/m)}, which are either notated with curled brackets as in (10), or, in the case of singletons, as LP rules of the familiar kind. A complex LP rule {{at, dl), (no_, ,%) ..... {am, B,n)} is interpreted as a LP condition of the following form {(o 1 C 6 A~t C -~)V(a~ C 6 At/= C_ -,)v . vt~.,C6A~,,C_~))--((a, C_3,A3, c_ ~}v(a.. c_ "l A ,'t= C 6)V ... V(am C 3, A dm ~ 6)}. Any of the atomic LP rules within the complex LP rule can be violated as long as the violations are sanctioned by at least one of the atomic LP rules.Notice that with respect to this definition, "regular" LP rules, i.e., sing{elons, can be regarded as a speciaJ case of complex I,P rules.[ want ¢o suggest that the LP rules in {Sa}, (8h), and (l-I} arc a subset of the LP rules of German. This analysis makes a number of empirical predictions. For example, it predicts that (15) and 16 In (17) the sub-LP-rules +DAT < +ACC and -FOCUS < +FOCUS are violated. No other sub-LP-rule legitimizes these violations and therefore the sentence is bad.This agrees with the findings of Lenerz (1977) , who tested a large number of sample sentences in order to determine the interaction of the unmarked syntactic order and the ordering preferences introduced by discourse roles. There are too many possible feature iustantiatious and permutations of the three noun phrases to permit making grammaticality predictions here for a larger sample of ordering variants. 
So far 1 have not discovered any empirical deficiencies in the proposed analysis. implications for implementations: The theory of GPSG, a,s described by its creators and as outlined in this paper, cannot be used directly for implementation. The number of rules generated by the metagrammar is just too large. The Hewlett-Packard system (Gawron etal., 1982} as well as Henry Thompson's program, which are both based on a pre-ID/LP version of GPSG, use metarules as metagrammatical devices, but with feature iustantiation built into the processor. Agreement checks, however, which correspond to the work of the metagrammatical feature instantiation principles, are done at parse time. As Berwick and Weinberg (1982] have pointed out, the cont ext-freeness of a grammar might not accomplish much when the number of rules explodes. The more components of the metagrammar that can be built into the processor (or used by it as additional rule sets at parse time), the smaller the resulting grammar will be. The task is to search for parsing algorithms that. incorporate the work of the metagrammar into context-free phrase structure parsing without completely losing the parsing time advantages of the latter. Most PSG parsers do feature handling at parse time. Recently, Shieber (forthcoming) has extended the Earley algorithm (Earley 1970) to incorporate the linearization process without a concomitant loss in parsing c~ciency. The redefinition of the LP component proposed in this paper can be intrusted easily and efficiently into Shieber's extension.If the parser uses the disjunctive LP rules to accept all ordering variants that are well-formed with respect to a discourse, there still remains the question of how the generator chooses among the disjuncts in the LP rule. It would be very surprising if the different orderings that can be obtained by choosing one LP rule disjua:t over another did in fact occur with equal frequency. Although there are no clear results that might provide an answer to this question, there are indications that certain disjuntas "win out" more often than others. However, this choice is purely stylistic. A system that is supposed to produce highquality output might contain a stylistic selection mechanism that avoids repe, hions or choose~ among variants according to the tyt:e of text or dialogue. conclusion: The proposed analysis of partially free word order in German makes the accurate predictions about the gram-musicality of ordering variants, including their appropriateness with respect to a given diseo~se. The 1D/LP format, which has the mechanisms to handle free word order, has been extended to account for the interaction of syntax and pragmat.its, as well as for the mutually competing ordering principles. The modifications are compatible with efficient implementation models. The redefined LP component can be used for the implementation of stylistic choice. i. introduction: The relatively free order of major phrasal constituents in German belongs to the class of natural-language phenomena that require a closer interaction of syntax and pragmatics than is usually accounted for in formal linguistic frameworks. 
Computational linguists who pay attention to both syntax and pragmatics will find that analyses of such phenomena can provide valuable data for the design of systems that integrate these linguist ic components.German represents a good test case because the role of pragmatics in governing word order is much greater than in English while the role syntax plays is greater than in some of the so-called free-word-order languages like Warlpiri. The German data are well attested and thoroughly discussed in the descriptive literature The fact that English and German are closely related makes it easier to assess these data and to draw parallels.The .~imple analysis presented here for dealing with free word order in German syntax is based on the linguistic framework of Generalized Phrase Structure Grammar (GPSG}, especially on its Immediate Dominance/Linear Precedence formalism {ID/LP), and complements an earlier treatment of German word order) The framework is slightly modified to accommodate the relevant class of word order regularities.The syntactic framework presented in this paper is not hound to any particular theory of discourse processing; it enables syntax to interact with whatever formal model of pragmatics one might want to implement. A brief discussion of the framework's implication~ for computational implementation centers Upon the problem of the status of metagrammatical devices. Appendix:
null
null
null
null
{ "paperhash": [ "cocchiarella|situations_and_attitudes.", "earley|an_efficient_context-free_parsing_algorithm" ], "title": [ "Situations and Attitudes.", "An efficient context-free parsing algorithm" ], "abstract": [ "In this provocative book, Barwise and Perry tackle the slippery subject of \"meaning, \" a subject that has long vexed linguists, language philosophers, and logicians.", "A parsing algorithm which seems to be the most efficient general context-free algorithm known is described. It is similar to both Knuth's LR(k) algorithm and the familiar top-down algorithm. It has a time bound proportional to n3 (where n is the length of the string being parsed) in general; it has an n2 bound for unambiguous grammars; and it runs in linear time on a large class of grammars, which seems to include most practical context-free programming language grammars. In an empirical comparison it appears to be superior to the top-down and bottom-up algorithms studied by Griffiths and Petrick." ], "authors": [ { "name": [ "N. Cocchiarella", "J. Barwise", "J. Perry" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "J. Earley" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null ], "s2_corpus_id": [ "124893762", "35664" ], "intents": [ [ "background" ], [ "methodology" ] ], "isInfluential": [ false, false ] }
null
500
0.012
null
null
null
null
null
null
null
null
259918795730b7d7b041582ca038e241b9a26ffe
1529030
null
{TELEGRAM}: A Grammar Formalism for Language Planning
Planning provides the basis for a theory of language generation that considers the communicative goals of the speaker when producing utterances. One central problem in designing a system based on such a theory is specifying the requisite linguistic knowledge in a form that interfaces well with a planning system and allows for the encoding of discourse information. The TELEGRAM (TELEological GRAMmar) system described in this paper solves this problem by annotating a unification grammar with assertions about how grammatical choices are used to achieve various goals, and by enabling the planner to augment the functional description of an utterance as it is being unified. The control structures of the planner and the grammar unifier are then merged in a manner that makes it possible for general planning to be guided by unification of a particular functional description.
{ "name": [ "Appelt, Douglas E." ], "affiliation": [ null ] }
null
null
21st Annual Meeting of the Association for Computational Linguistics
1983-06-01
12
53
null
By viewing language generation as a planning process, one can not only account for the way people use language to satisfy different goals they have in mind, but also model the broad interaction between a speaker's physical and linguistic actions. Formal models of planning can provide the basis for a theory of language generation in which communicative goals play a central role. Recent research in natural-language generation [1] [2] has established the feasibility of regarding planning as the basis for the generation of utterances. This paper examines some of the problems involved in devising a grammar formalism for such a generation system that produces utterances and describes a particular implementation of a unification grammar, referred to as TELEGRAM, that solves some of these problems. The KAMP system [1] was designed with the problems of multiple-goal satisfaction and the integration of physical and linguistic actions in mind. KAMP is a multiagent planning system that can be given a high-level description of an agent's goals, and then produce a plan that includes the performance of both physical and linguistic actions by several agents that will achieve the agent's goals. In the development of KAMP it was recognized that syntactic, semantic and pragmatic knowledge sources are necessary for the planning of utterances. These sources of knowledge were stored independently inside the system: a grammar was provided in addition to the axioms that constitute the agent's knowledge of the pragmatics of communication. However, rather than have one process that decides what to say, drawing on knowledge about the world and about communication, plus another independent process that decides how to encode that knowledge into English, KAMP employs a single process that uses both sources of knowledge to produce plans. The primary focus of the research on KAMP was the representation and integration of the knowledge needed to make plans involving utterances. One area that was neglected was the representation of grammatical knowledge. KAMP relies on a very simple grammar composed of context-free rules that enable it to generate simple sentences. Such phenomena as gapping are totally outside of its capability. Because of the ad hoc nature of the representation, modifications and extensions of its linguistic coverage are very difficult. Another criticism of KAMP's approach was that there was no obvious way to control the planning process. Instead of formulating a plan quickly, KAMP would search a large space of linguistic alternatives until it found an "optimal" solution. As some critics have pointed out (e.g., [5]), such exhaustive planning is often not needed in practical situations, and is certainly not how people produce utterances in real time. KAMP would never produce an ungrammatical sentence, because it could always do unlimited backtracking after making an incorrect decision. The remainder of this paper describes how to use a unification grammar to address these two problems of representation and control.
Comparison with Related Systems.

There are several significant differences between TELEGRAM and other natural-language-generation systems that have been developed using unification grammar or systemic grammar.

The TEXT system developed by McKeown [11] uses a unification grammar to generate coherent multisentential text and employs a straightforward unification algorithm. The unifier does not draw upon the system's pragmatic knowledge to decide among alternatives in the grammar, and, being reduced to blind search, it requires a great deal of time to unify a single text functional description. The TEXT system does all its planning during the construction of the text FD and uses the unification process to fill in the grammatical details essential for producing the final utterance.

The NIGEL grammar designed by Mann [10] is a systemic grammar, but the philosophies underlying systemic and unification grammar are so similar that a comparison of the systems is warranted. The system "choosers" of NIGEL play a role similar to the annotations on the alternatives in TELEGRAM, and many other parallels can be drawn. The most fundamental difference between the two systems is in the assumptions underlying their design. NIGEL is intended to be completely independent of any particular application system or knowledge representation, an intention that has influenced all aspects of its design. A consequence of this decision is a complete separation of the grammatical processes from the other processes in the system, permitting communication only through a narrow channel. TELEGRAM, on the other hand, closely couples reasoning about syntactic choices with the other planning done by the system, thereby enabling reasoning about combined physical and linguistic actions. However, TELEGRAM sacrifices some of the simplicity of the interface between the grammar and the rest of the system.

Summary and Conclusion.

The TELEGRAM system described in this paper is an attempt to incorporate a large grammar into a language-planning system. This particular approach to representing knowledge in an annotated unification grammar and combining the processes of planning and unification results in the following advantages:

• Greater efficiency in the lower levels of the planning process, because the planner can be invoked to decide among alternatives, thus avoiding the reliance upon blind search.
• A simple method of resource allocation to the planning process, by limiting the amount of backtracking the unifier is allowed to do.
• The ability to combine reasoning about physical and linguistic actions with a grammar that provides significantly wide coverage of the language.

Although the development of TELEGRAM is still in progress, early experience suggests that the TELEGRAM formalism has sufficient power to represent the syntactic knowledge of a language-planning system that efficiently encompasses a significant portion of English. A small grammar has been written that already has more power than the grammar of KAMP. Research is being conducted into discovering those discourse-related features that have to be included in a unification grammar. Although writing a "reversible" grammar does not appear to be feasible at this time, we hope this research will lead to the specification of a set of features that can be shared between unification grammars for parsing and for generation.
Unification Grammar.

A unification grammar characterizes linguistic entities* by collections of features called functional descriptions (FDs). Each of the features in an FD has a value that can be either atomic or another functional description. A unification grammar is a large FD that characterizes the features of every possible sentence in the language. In this paper, the FD that characterizes the intended utterance is called the text FD and the FD that constitutes the grammar is called the grammar FD.

[Footnote *: Unification grammar has often been referred to as functional grammar in the literature, e.g., [7], [11]. It is related to and shares many ideas with systemic grammar [6].]

The most salient feature of unification grammar that distinguishes it from other grammatical formalisms is its emphasis on linguistic function. All of the features used by the grammar have equal status: functional and discourse-related features like topic and focus share equal status with grammatical roles like subject and predicate, and with syntactic categories like NP and VP.

Unification grammars are particularly well suited for language generation because they allow the encoding of discourse features in the grammar. A functional description can be constructed incorporating these features, and the syntactic details of the final utterance can then be specified through unification with the grammar FD. The process that constructs the text FD can treat it as a high-level blueprint fleshed out by unification, thereby relieving the high-level process of the need to consider low-level grammatical details. This strategy was used by McKeown [11].

Two functional descriptions can be unified by an algorithm that is similar to set union. Suppose F1 and F2 are functional descriptions. To compute the unification F3 of F1 and F2, written F3 = F1 ∪ F2, each feature (A, v) is considered in turn: if the attribute A occurs in only one of F1 and F2, the feature is included in F3; if A occurs in both with atomic values, the two values must be identical and the feature is included in F3; if A occurs in both with values that are themselves FDs, F3 contains a feature whose value is the unification of the two values. If any one of these conditions fails, then the unification itself fails and the value of F1 ∪ F2 is undefined.

Functional descriptions can optionally contain a distinguished feature called PATTERN that is used to specify the surface order of constituents in the FD. The unification of two patterns is different in that it is based on deciding whether or not the orderings represented by the two patterns are consistent.
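For concreteness, the following is a minimal sketch of the unification operation just described, with FDs represented as nested Python dicts. The function names and the simplified PATTERN handling are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of FD unification: an FD is a dict mapping attributes to
# atomic values or sub-FDs. PATTERN handling is deliberately simplified.

class UnificationFailure(Exception):
    """Raised when two FDs contain incompatible features."""

def unify(fd1, fd2):
    """Return the unification of two FDs, or raise UnificationFailure."""
    result = {}
    for attr in set(fd1) | set(fd2):
        if attr not in fd2:                            # feature only in fd1
            result[attr] = fd1[attr]
        elif attr not in fd1:                          # feature only in fd2
            result[attr] = fd2[attr]
        else:
            v1, v2 = fd1[attr], fd2[attr]
            if isinstance(v1, dict) and isinstance(v2, dict):
                result[attr] = unify(v1, v2)           # recursive case
            elif attr == "PATTERN":
                result[attr] = merge_patterns(v1, v2)  # order consistency
            elif v1 == v2:                             # atomic values must agree
                result[attr] = v1
            else:
                raise UnificationFailure(f"{attr}: {v1} vs {v2}")
    return result

def merge_patterns(p1, p2):
    """Rough stand-in for pattern unification: accept the two orderings only
    if one is a subsequence of the other (a real unifier interleaves them)."""
    longer, shorter = (p1, p2) if len(p1) >= len(p2) else (p2, p1)
    it = iter(longer)
    if all(any(x == y for y in it) for x in shorter):
        return longer
    raise UnificationFailure(f"inconsistent patterns {p1} / {p2}")

# Example: a small text FD unified with a small grammar fragment.
text_fd = {"CAT": "S", "SUBJ": {"CAT": "NP", "LEX": "toolbox"}}
grammar_fd = {"CAT": "S", "SUBJ": {"CAT": "NP"}, "PATTERN": ["SUBJ", "VERB"]}
print(unify(text_fd, grammar_fd))
```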
In spite of its advantages, there are some serious problems with unification grammar if it is employed straightforwardly in a language-planning system. One of the most serious problems is the inefficiency of the unification algorithm as described above. A straightforward application of that algorithm is very expensive, consuming an order of magnitude more time in the unification process than in the entire planning process leading up to the construction of the text FD [11]. The problem is not simply one of efficiency of implementation. It is inherent in any algorithm that searches alternatives blindly and thereby does work that is exponentially related to the number of alternatives in the grammar. Any solution to the problem must be a conceptual one that minimizes the number of alternatives that ever have to be considered.

Another problem is that the text FD is not as high-level a blueprint as is really needed, because every feature related to the speaker's intention to communicate must be part of the text FD when unification takes place. This implies, for example, that every descriptor that is part of a referring expression must be specified in advance. This may be unnecessary, because for certain grammatical choices the referring expression may be eliminated entirely. For example, in the by-phrase of a passive sentence, reference may be made pronominally (or not at all), in which case descriptors are unnecessary. Since the planner must know the linguistic context when planning descriptors, a noun-phrase FD is best constructed initially with a REFERENT feature, and later expanded by adding features that correspond to the descriptors.

While it is conceivable that the grammar could be designed to expand a REFERENT feature into a set of descriptors, that would amount to encoding in the grammar what is essentially a planning problem. This is undesirable because the grammar, being a repository of syntactic knowledge, should be separated from pragmatic knowledge. Conversely, it is also desirable to separate detailed syntactic knowledge from the planner, and the failure to do so was a major shortcoming of KAMP.

The next section describes how unification and planning can be combined to allow syntactic knowledge to be separated from the planner, while still allowing the required flexibility of interaction between the planner and the grammar.

Combination of Unification and Planning.

The TELEGRAM system solves the problems of efficiency and modularity through a close coupling between the processes of unification and planning. (The name TELEGRAM stands for TELEological GRAMmar, because planning and goal satisfaction are integrated into the unification process.)

KAMP divided its actions into an abstraction hierarchy. The action hierarchy, as it pertains to linguistic actions, is shown in Figure 3.1. [Figure 3.1: the hierarchy of linguistic actions, with illocutionary acts at the top, surface speech acts, propositional acts, and concept activation actions below them, and utterance acts at the lowest level.] Illocutionary acts are easily described at an abstract level that is best reasoned about by a conventional planning system, as was done in KAMP [1] and by Cohen [2]. However, as one progresses down the hierarchy, the planning becomes more and more dependent on the constraints of the grammar, although goal satisfaction is still very much a part of the reasoning that takes place. It is at the level of surface speech act and concept activation actions that the planning and unification processes can be most advantageously merged.

The means of combining planning and unification works as follows. At the time the planner plans to perform a surface speech act, enough information has been specified so that it knows the general syntactic structure of the sentence (declarative, interrogative, or imperative). A functional description of the utterance is created and then unified with the grammar. This functional description is very general and does not contain sufficient information to specify a unique sentence. It is elaborated during the process of unification, with features added to it incrementally. The planner is called upon by the unification algorithm at the appropriate time to add the appropriate features. The end result is a functional description that is the same as if a complete functional description of the intended utterance had been unified with the grammar by means of a conventional unification algorithm that does not invoke planning.

The planner is invoked by the unifier when either of two situations arises.

The unifier detects a feature in the text FD that has no corresponding feature in the grammar FD.
Such features are a signal that elaboration must be performed. The feature is annotated with a goal wff that the planner plans to achieve, and the resulting actions specify additions to the functional description being unified. The unification process then continues in the normal manner.

The unifier detects a choice in the grammar functional description that cannot be resolved through the unification of atomic features. Each choice in the grammar is annotated with a wff that describes to the planner what the effects of making the choice will be. The planner then decides which alternative is most consistent with its plans, making an arbitrary choice if insufficient information is available for a decision.

The resulting combination of planning and unification has a number of benefits, which follow from annotating a grammar with information useful to the planner rather than trying to work linguistic knowledge into the planner in an ad hoc manner.

The ability to perform action subsumption, the opportunistic "piggybacking" of related goals as described in [1], is enhanced. Whether or not one can incorporate additional nonreferring descriptors into a noun phrase is governed by the structure and function of the noun phrase being planned. For example, a pronominal reference cannot incorporate any additional descriptors at all. Therefore, if a planner were to decide whether or not to perform action subsumption, it would have to know in advance how a referent was going to be realized. If this decision were to be made before unification, the planner would have to have the detailed linguistic knowledge to know that it was possible. With a simple grammar like KAMP's this was possible, but with a larger grammar it is clearly undesirable.

The ability to do multiple-utterance and discourse planning is also enhanced. Since the grammar and planner are closely coupled, information can easily be fed back from the grammar to the planner. This feedback is one of the features that distinguish a language-planning system from a system that first decides what to say, then how to say it. When an alternative is chosen, the planner has information about the goal that is to be achieved through the selection of that alternative. If unification based on that selection fails, the planner, instead of blindly trying other alternatives, can revise the entire plan -- including the incorporation of multiple utterances where only one was planned originally.

Example.

This example illustrates how a language system can use an annotated unification grammar like TELEGRAM. Suppose that there are two agents operating in an equipment-assembly domain, and that the planning agent decides that the other agent should know that a particular screwdriver, S1, is in a particular toolbox, TB1. He then plans the illocutionary act

    Do(AGT1, Inform(AGT2, Location(S1) = TB1)).

The planner then plans a surface speech act consisting of a declarative sentence with the same propositional content as the illocutionary act. However, instead of constructing a syntactic-structure tree by using context-free rules, as KAMP would do in this example, the TELEGRAM planner creates a high-level functional description of the intended utterance. At this point, the planner is no longer directly in control of the planning process.
The planner invokes the unifier with this text functional description and the grammar functional description, and relinquishes control to the unification process. The unification process follows the algorithm described in Section 2, until there is either an alternative in the grammar that needs to be selected or some feature in the text FD that does not unify with any feature in the grammar FD.

In this example, the second of these situations -- a feature in the text FD that does not unify with any feature of the grammar FD -- arises when the noun-phrase FD for the toolbox is unified with the grammar. The grammar FD for the noun phrase tells what the structure of the constituent is, but it does not contain a REFERENT feature. The straightforward application of the unification algorithm of Section 2 would simply yield the grammar FD along with the feature "REFERENT = TB1," which is not particularly useful. However, the feature REFERENT has an annotation that tells the unifier that the planner should be invoked with the goal of activating the concept TB1 for AGT2. The planner then plans a concept activation action, using its knowledge about AGT1 and AGT2's mutual knowledge, perhaps inserts a pointing action into the plan, and augments the text FD to resemble the following:

    [CAT = NP,
     DESC = (Toolbox(TB1), Under(TB1, TABLE1))]

The new, augmented functional description still does not unify with the grammar FD, but the annotation for the DESC feature is written to insert FDs corresponding to each of the descriptors into the text FD. This next expansion results in an FD that can be unified directly with the grammar FD, using the algorithm described in Section 2. It is identical to the one that would have been planned had the entire FD been specified at the start of the unification process. However, by postponing some of the planning, and placing it under control of the unification process, the system preserves the ability to plan hierarchically while enhancing its ability to coordinate physical and linguistic actions.
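A hedged sketch of how the planner callbacks described above might be wired into the unifier: an annotation table maps unmatched text-FD features (here REFERENT) to a planner routine that expands the FD with descriptors before ordinary unification proceeds. The FDs, the planner stub, and all names below are illustrative assumptions, not the TELEGRAM implementation.

```python
# Illustrative sketch of planner-guided unification in the spirit of TELEGRAM:
# when the unifier meets a text-FD feature that has no counterpart in the
# grammar FD, an annotation hands it to a planner, which augments the text FD.

def plan_concept_activation(referent, hearer):
    """Planner stub: pick descriptors that let `hearer` identify `referent`.
    A real planner would reason over mutual knowledge and might instead choose
    a pronoun or insert a pointing action into the plan."""
    known_descriptors = {"TB1": ["(Toolbox TB1)", "(Under TB1 TABLE1)"]}
    return known_descriptors.get(referent, [])

# Annotations: which unmatched text-FD features trigger planning.
ANNOTATIONS = {"REFERENT": plan_concept_activation}

def unify_with_planning(text_fd, grammar_fd, hearer="AGT2"):
    """Elaborate annotated features via the planner, then do a simplified
    unification: unmatched features are copied, matched atoms must agree."""
    text_fd = dict(text_fd)
    for attr, value in list(text_fd.items()):
        if attr not in grammar_fd and attr in ANNOTATIONS:
            text_fd["DESC"] = ANNOTATIONS[attr](value, hearer)  # planner call
    result = dict(grammar_fd)
    for attr, value in text_fd.items():
        if attr in result and result[attr] != value:
            raise ValueError(f"unification fails on {attr}")
        result[attr] = value
    return result

# The toolbox noun phrase from the example: REFERENT=TB1 is expanded into
# descriptors before the NP is unified with a small grammar fragment.
np_text_fd = {"CAT": "NP", "REFERENT": "TB1"}
np_grammar_fd = {"CAT": "NP", "DEF": "INDEF"}
print(unify_with_planning(np_text_fd, np_grammar_fd))
# -> {'CAT': 'NP', 'DEF': 'INDEF', 'REFERENT': 'TB1',
#     'DESC': ['(Toolbox TB1)', '(Under TB1 TABLE1)']}
```

In this sketch the grammar's choice annotations are omitted; only the elaboration path (unmatched annotated feature, then planner, then unification) is shown.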
null
null
null
null
null
null
{ "paperhash": [ "mann|nigel:_a_systemic_grammar_for_text_generation.", "cohen|dependencies_of_discourse_structure_on_the_modality_of_communication:_telephone_vs._teletype", "conklin|salience:_the_key_to_the_selection_problem_in_natural_language_generation", "cohen|elements_of_a_plan-based_theory_of_speech_acts", "allen|a_functional_grammar", "mckeown|generating_natural_language_text_in_response_to_questions_about_database_structure", "appelt|planning_natural_language_utterances_to_satisfy_multiple_goals" ], "title": [ "Nigel: A Systemic Grammar for Text Generation.", "Dependencies of Discourse Structure on the Modality of Communication: Telephone vs. Teletype", "Salience: The Key to the Selection Problem in Natural Language Generation", "Elements of a Plan-Based Theory of Speech Acts", "A Functional Grammar", "Generating natural language text in response to questions about database structure", "Planning natural language utterances to satisfy multiple goals" ], "abstract": [ "Abstract : Programming a computer to write text which meets a prior need is a challenging research task. As part of such research, Nigel, a large programmed grammar of English, has been created in the framework of systemic linguistics begun by Halliday. In addition to specifying function and structures of English, Nigel has a novel semantic stratum which specifies the situations in which each grammatical feature should be used. The report consists of three papers on Nigel: an introductory overview, the script of a demonstration of its use in generation, and an exposition of how Nigel relates to the systemic framework. Although the effort to develop Nigel is significant both as computer science research and as linguistic inquiry the outlook of the report is oriented to its linguistic significance.", "A desirable long-range goal in building future speech understanding systems would be to accept the kind of language people spontaneously produce. We show that people do not speak to one another in the same way they converse in typewritten language. Spoken language is finer-grained and more indirect. The differences are striking and pervasive. Current techniques for engaging in typewritten dialogue will need to be extended to accomodate the structure of spoken language.", "We argue that in domains where a strong notion of salience can be defined, it can be used to provide: (1) an elegant solution to the selection problem, i.e. the problem of how to decide whether a given fact should or should not be mentioned in the text; and (2) a simple and direct control framework for the entire deep generation process, coordinating proposing, planning, and realization. (Deep generation involves reasoning about conceptual and rhetorical facts, as opposed to the narrowly linguistic reasoning that takes place during realization.) We report on an empirical study of salience in pictures of natural scenes, and its use in a computer program that generates descriptive paragraphs comparable to those produced by people.", "This paper explores the truism that people think about what they say. It proposes hat, to satisfy their own goals, people often plan their speech acts to affect their listeners' beliefs, goals, and emotional states. Such language use can be modelled by viewing speech acts as operators in a planning system, thus allowing both physical and speech acts to be integrated into plans. 
\n \nMethodological issues of how speech acts should be defined in a plan-based theory are illustrated by defining operators for requesting and informing. Plans containing those operators are presented and comparisons are drawn with Searle's formulation. The operators are shown to be inadequate since they cannot be composed to form questions (requests to inform) and multiparty requests (requests to request). By refining the operator definitions and by identifying some of the side effects of requesting, compositional adequacy is achieved. The solution leads to a metatheoretical principle for modelling speech acts as planning operators.", "Functional Grammar describes grammar in functional terms in which a language is interpreted as a system of meanings. The language system consists of three macro-functions known as meta-functional components: the interpersonal function, the ideational function, and the textual function, all of which make a contribution to the structure of a text. The concepts discussed in Functional Grammar aims at giving contribution to the understanding of a text and evaluation of a text, which can be applied for text analysis. Using the concepts in Functional Grammar, English teachers may help the students learn how various grammatical features and grammatical systems are used in written texts so that they can read and write better.", "There are two major aspects of computer-based text generation: (1) determining the content and textual shape of what is to be said; and (2) transforming that message into natural language. Emphasis in this research has been on a computational solution to the questions of what to say and how to organize it effectively. A generation method was developed and implemented in a system called TEXT that uses principles of discourse structure, discourse coherency, and relevancy criterion. \nThe main features of the generation method developed for the TEXT strategic component include (1) selection of relevant information for the answer, (2) the pairing of rhetorical techniques for communication (such as analogy) with discourse purposes (for example, providing definitions) and (3) a focusing mechanism. Rhetorical techniques, which encode aspects of discourse structure, are used to guide the selection of propositions from a relevant knowledge pool. The focusing mechanism aids in the organization of the message by constraining the selection of information to be talked about next to that which ties in with the previous discourse in an appropriate way. \nThis work on generation has been done within the framework of a natural language interface to a database system. The implemented system generates responses of paragraph length to questions about database structure. Three classes of questions have been considered: questions about information available in the database, requests for definitions, and questions about the differences between database entities. \nThe main theoretical results of this research have been on the effect of discourse structure and focus constraints on the generation process. A computational treatment of rhetorical devices has been developed which is used to guide the generation process. Previous work on focus of attention has been extended for the task of generation to provide constraints on what to say next. The use of these two interacting mechanisms constitutes a departure from earlier generation systems. 
The approach taken in this research is that the generation process should not simply trace the knowledge representation to produce text. Instead, communicative strategies people are familiar with are used to effectively convey information. This means that the same information may be described in different ways on different occasions.", "This dissertation presents the results of research on a planning formalism for a theory of natural language generation that incorporates generation of utterances that satisfy multiple goals. Previous research in the area of computer generation of natural language utterances has concentrated on one of two aspects of language production: (1) the process of producing surface syntactic forms from an underlying representation, and (2) the planning of illocutionary acts to satisfy the speaker's goals. This work concentrates on the interaction between these two aspects of language generation and considers the overall problem to be one of refining the specification of an illocutionary act into a surface syntactic form, emphasizing the problems of achieving multiple goals in a single utterance. \nPlanning utterances requires an ability to do detailed reasoning about what the hearer knows and wants. A formalism, based on a possible worlds semantics of an intensional logic of knowledge and action, was developed for representing the effects of illocutionary acts and the speaker's beliefs about the hearer's knowledge of the world. Techniques are described that enable a planning system to use the representation effectively. \nThe language planning theory and knowledge representation are embodied in a computer system called KAMP (Knowledge And Modalities Planner) which plans both physical and linguistic actions, given a high level description of the speaker's goal. \nThe research has application to the design of gracefully interacting computer systems, multiple-agent planning systems, and planning to acquire knowledge." ], "authors": [ { "name": [ "W. Mann", "C. Matthiessen" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Philip R. Cohen", "S. Fertig", "Kathy Starr" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "E. J. Conklin", "David D. McDonald" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Philip R. Cohen", "C. Raymond Perrault" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "H. B. Allen", "M. Bryant" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "K. McKeown" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "D. 
Appelt" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null, null, null, null, null ], "s2_corpus_id": [ "57089912", "8424668", "5189365", "2166355", "150098969", "62743223", "60491098" ], "intents": [ [], [], [], [], [ "background" ], [], [ "methodology", "background" ] ], "isInfluential": [ false, false, false, false, false, false, true ] }
- Problem: Designing a system for language generation that considers communicative goals and interfaces well with a planning system while encoding discourse information. - Solution: The TELEGRAM system solves this problem by annotating a unification grammar with assertions about how grammatical choices achieve goals and enabling the planner to augment the functional description of an utterance during unification.
500
0.106
null
null
null
null
null
null
null
null
3d7ebfdff3b800d4b7fcba880b226a42538f47fb
65486
null
A Foundation for Semantic Interpretation
Traditionally, translation from the parse tree representing a sentence to a semantic representation (such as frames or procedural semantics) has always been the most ad hoc part of natural language understanding (NLU) systems. However, recent advances in linguistics, most notably the system of formal semantics known as Montague semantics, suggest ways of putting NLU semantics onto a cleaner and firmer foundation. We are using a Montague-inspired approach to semantics in an integrated NLU and problem-solving system that we are building. Like Montague's, our semantics are compositional by design and strongly typed, with semantic rules in one-to-one correspondence with the meaning-affecting rules of a Marcus-style parser. We have replaced Montague's semantic objects, functors and truth conditions, with the elements of the frame language Frail, and added a word sense and case slot disambiguation system. The result is a foundation for semantic interpretation that we believe to be superior to previous approaches.
{ "name": [ "Hirst, Graeme" ], "affiliation": [ null ] }
null
null
21st Annual Meeting of the Association for Computational Linguistics
1983-06-01
60
15
null
By semantic interpretation we mean the process of mapping from a syntactically analyzed sentence of natural language to a representation of its meaning. We exclude from semantic interpretation any consideration of discourse pragmatics; rather, discourse pragmatics operate upon the output of the semantic interpreter. We also exclude syntactic analysis; the integration of syntactic and semantic analysis becomes very messy when complex syntactic constructions are considered, and, moreover, it is our observation that those who argue for the integration of the two are usually arguing for subordinating the role of syntax, a position we reject. This is not to say that parsing can get by without semantic help; indirect object finding, and prepositional phrase and relative clause attachment, for example, often require semantic knowledge. Below we will show that syntax and semantics may work well together while remaining distinct modules.

Research on semantic interpretation in artificial intelligence goes back to Woods's dissertation (1967, 1968), which introduced procedural semantics in a natural-language front-end for an airline reservation system. Woods's system had rules with patterns that, when they matched part of the parsed input sentence, contributed a string to the semantic representation of the sentence. This string was usually constructed from the terminals of the matched parse tree fragment. The strings were combined to form a procedure call that, when evaluated, entered or retrieved the appropriate database information. This approach is still the predominant one today, and even though it has been refined over the years, semantic interpretation remains perhaps the least understood and most ad hoc area of natural language understanding (NLU).¹ However, recent advances in linguistics, most notably Montague semantics (Montague 1973; Dowty, Wall and Peters 1981), suggest ways of putting NLU semantic interpretation on a cleaner and firmer foundation than it now is. In this paper, we describe such a foundation.²

[Footnote 1: It is also philosophically controversial. For discussion, see Fodor 1978, Johnson-Laird 1978, Fodor 1979, and Wilks 1982.]
[Footnote 2: This is not the only current work with this goal; in Section 7 we discuss other similarly motivated work.]

In his well-known "PTQ" paper (Montague 1973), Richard Montague presented the complete syntax and semantics for a small fragment of English. Although it was limited in vocabulary and syntactic complexity, Montague's fragment dealt with such important semantic problems as opaque contexts, different types of predication with the word be, and the "the temperature is 90" problem;³ for details of these, see Dowty, Wall and Peters (1981).

[Footnote 3: That is, to ensure that "The temperature is 90 and the temperature is rising" cannot lead to the inference that "90 is rising".]

Montague's semantic rules correspond to what we have been calling semantic interpretation. That is, in conjunction with a syntactic process, they produce a semantic representation, or translation, of a sentence. There are four important properties of Montague semantics that we will examine here. Below, we will carry three of these properties over into our own semantics.

The first property, the one that we will later drop, is that for Montague, semantic objects, the results of the semantic translation, were such things as individual concepts (which are functions to individuals from the cartesian product of points in time and possible worlds), properties of individual concepts, and functions of functions of functions of functions.
At the top level, the meaning of a sentence was a truth condition relative to a possible world and point in time. These semantic objects were represented by expressions of intensional logic; that is, instead of translating English directly into these objects, a sentence was first translated to an expression of intensional logic, for which, in turn, there existed an interpretation in terms of these semantic objects.

Second, Montague had a strong theory of types for his semantic objects: a set of types that corresponded to types of syntactic constituents. Thus, given a particular syntactic category, such as proper noun or adverb, Montague was able to say that the meaning of a constituent of that category was a semantic object of such and such a type. Montague's system of types was recursively defined, with entities, truth values and intensions as primitives, and other types defined as functions from one type to another, in such a manner that if syntactic category X was formed by adding category Y to category Z, then the type corresponding to Z would be functions from senses of the type of Y to the type of X.

The third property is that Montague's semantics is compositional: the meaning of the whole is a systematic function of the meanings of its parts. There are two alternatives to this. The first alternative is that the meaning of the whole is a function of not just the parts but also the situation in which the sentence is uttered. For example, the possessive in English is highly dependent upon pragmatics; the phrase Nadia's penguin could refer, in different circumstances, to the penguin that Nadia owns, to the one that she is carrying but doesn't actually own, or to the one that she just bet on at the penguin races. Our definition above of semantic interpretation excluded this sort of consideration, but this should not be regarded as uncontroversial.

The second alternative to compositional semantics is that the meaning of the whole is not a systematic function of the parts in any reasonable sense of the word. This is exemplified by the interpretation of the word depart in Woods's original system, which varied greatly depending on the preposition it dominated (Woods 1967:A-43-A-46). For example, the interpretation of the sentence "AA-57 departs from Boston." is a call on the procedure depart; that is, the semantic object into which depart is translated is the procedure depart. (AA-57 is an airline flight.) However, the addition of a prepositional phrase changes this; Table 1 shows the interpretation of the same sentence after various prepositional phrases have been appended. For example, the addition of to Chicago changes the translation of depart to connect, though the intended sense of the word is clearly unchanged.⁸ This is necessitated by the particular set of database primitives that Woods used, selected for their being "atomic" (1967:7-4-7-11) rather than for promoting compositionality.
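To make the contrast concrete, here is a rough sketch of the kind of pattern-triggered, non-compositional translation just described, modeled on the entries of Table 1 (reproduced below). The rule patterns, the matching machinery, and the hardcoded place argument are simplified illustrations, not Woods's actual rules.

```python
# Sketch of Woods-style semantic rules: a whole parse pattern maps directly to
# a database procedure call, so adding a PP can change the predicate entirely.
# These patterns and predicates are illustrative, patterned on Table 1.

RULES = [
    # (subject-type, verb, preposition) -> builder for the procedure call
    (("flight", "depart", "from"),
     lambda flt, place: f"depart({flt}, {place})"),
    (("flight", "depart", "to"),
     lambda flt, place: f"connect({flt}, boston, {place})"),  # predicate switches!
    (("flight", "depart", "at"),
     lambda flt, time: f"equal(dtime({flt}, boston), {time})"),
    (("flight", "depart", "after"),
     lambda flt, time: f"greater(dtime({flt}, boston), {time})"),
    (("flight", "depart", "before"),
     lambda flt, time: f"greater({time}, dtime({flt}, boston))"),
]

def interpret(subject_type, verb, prep, subject, complement=None):
    """Return the procedure-call string produced by the first matching rule."""
    for (s, v, p), build in RULES:
        if (s, v, p) == (subject_type, verb, prep):
            return build(subject, complement)
    raise ValueError("no rule matches")

print(interpret("flight", "depart", "from", "aa-57", "boston"))
print(interpret("flight", "depart", "after", "aa-57", "8:00am"))
```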
Rules in the system are able to generate non-compositional representations because they have the power to set an arbitrarily complex parse tree as their trigger, and to return an arbitrary representation that can modify or completely ignore the components of the parse trees they are supposed to be interpreting.⁷ For example, a rule can say (1967:A-44): if you have a sentence whose subject is a flight, whose verb is leave or depart, and which has two (or more) prepositional phrases modifying the verb, one with from and a place name, the other with at and a time, then the interpretation is equal (dtime (a, b), c), where a is the flight, b is the place, and c is the time. Thus while Woods's semantics could probably be made reasonably compositional simply by appropriate adjustment of the procedure calls into which sentences are translated, it would still not be compositional by design the way Montague semantics is.

[Footnote 7: Nor is there anything that prevents the construction of rules that would result in conjunctions with conflicting, rather than merely redundant, terms.]
[Footnote 8: We have simplified a little here in order to make our point. In fact, sentences like those in Table 1 with prepositional phrases will actually cause the execution of two semantic rules: one for the complete sentence (e.g., "AA-57 departs from Boston to Chicago.") and one for the sentence it happens to contain ("AA-57 departs from Boston."). The resulting interpretation will be the conjunction of the output from each rule (Woods 1967:9-5). Woods leaves it open (1967:9-7) as to how the semantic redundancy in such expressions should be handled, though one of his suggestions is a filter that would remove conjuncts implied by others, giving, in this case, the interpretation shown in Table 1.]

Table 1. Interpretations of "AA-57 departs from Boston ..." with various prepositional phrases appended:

    AA-57 departs from Boston at 8:00am.       equal (dtime (aa-57, boston), 8:00am)
    AA-57 departs from Boston after 8:00am.    greater (dtime (aa-57, boston), 8:00am)
    AA-57 departs from Boston before 8:00am.   greater (8:00am, dtime (aa-57, boston))

Although Montague semantics has much to recommend it, it is not possible to implement it directly in a practical NLU system, for two reasons. The first is that Montague semantics as currently formulated is computationally impractical. It throws around huge sets, infinite objects, functions of functions, and piles of possible worlds with great abandon. Friedman, Moran and Warren (1978a) point out that in the smallest possible Montague system, one with two entities and two points of reference, there are, for example, 2^(2^512) elements in the class of possible denotations of prepositions, each element being a set containing 2^512 ordered pairs.

The second reason we can't use Montague semantics directly is that truth-conditional semantics are not useful in AI; AI uses knowledge semantics (Tarnawsky 1982), in which semantic objects tend to be symbols or expressions in a declarative or procedural knowledge representation system. Moreover, truth-conditional semantics really only deals with declarative sentences (Dowty et al 1981:13) (though there has been work attempting to extend Montague's work to questions; e.g.
Hamblin 1973); a practical NLU system needs to be able to deal with commands and questions as well as declarative sentences.

There have, however, been attempts to take the intensional logic that Montague uses as an intermediate step in his translations, and give it a new interpretation in terms of AI-type semantic objects, thus preserving all other aspects of Montague's approach; see, for example, Hobbs and Rosenschein 1977, and Smith's (1979) objections to their approach. There has also been interest in using the intensional logic itself (or something similar) as an AI representation⁹ (e.g. Moore 1981). But while it may be possible to make limited use of intensional logic expressions,¹⁰ there are many problems that need to be solved before intensional logic or other flavors of logical forms could support the type of inference and problem solving that AI requires of its semantic representations; see Moore 1981 for a useful discussion. Moreover, Gallin (1975) has shown Montague's intensional logic to be incomplete. (See also the discussion in Section 7 of work using logical forms.)

[Footnote 9: Ironically, Montague regarded intensional logic merely as a convenience in specifying his translation, and one that was completely irrelevant to the substance of his semantic theories.]
[Footnote 10: Godden (1981) in fact uses them for simple translation between Thai and English.]

Nevertheless, it is possible to use many aspects of Montague's approach in semantics in AI. The semantic interpreter that we describe below maintains three of the four properties of Montague semantics that we described above, and we therefore refer to it as "Montague-inspired".

Our semantic interpreter is a component of a system that uses a frame-like representation for both story comprehension and problem-solving. The system includes a frame language, named Frail, a problem solver, and a discourse pragmatics component; further details may be found in Wong 1981a and Wong 1981b. The natural language front-end includes Paragram, a deterministic parser based on that of Marcus (1980). Unlike Marcus's parser, Paragram has two types of rule: base phrase structure rules and transformational rules. It is also able to parse ungrammatical sentences; it always uses the rule that matches best, even if none match exactly. Paragram is described in Charniak 1983.

[Table note a: The question-mark prefix indicates a variable. Whenever a free variable in a frame is bound to a variable in a frame determiner, a unique new name is generated for that variable and its bindings. In this paper, we shall assume for simplicity that variable names are magically "correct" from the start.]
[Table note b: Do not be misled by the fact that frames and frame determiners look similar. They are actually very different: the first is a static data structure; the second is a frame retrieval procedure.]
[Table note c: An instance is the result of evaluating a frame statement in Frail. It is a symbol that denotes the object referenced by the frame statement. To Absity, there is no distinction between the two; an instance can be used wherever a frame statement can.]
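To make the table notes above concrete, the following is a small hedged illustration of the three kinds of object they distinguish: a frame (static data structure), a frame determiner (a retrieval call), and an instance (the symbol that results from evaluating the determiner). The notation strings and the toy evaluator are assumptions modeled on the examples later in the paper, not Frail itself.

```python
# Toy illustration of frame vs. frame determiner vs. instance.
# The Frail-like syntax and this evaluator are illustrative assumptions.

knowledge_base = {"book": ["book37"], "store": ["store12"]}

# A frame is a static data structure describing a concept (shown for contrast).
frame_definition = "(frame: book isa: physical-object slots: (author title))"

def evaluate_frame_determiner(determiner):
    """Toy stand-in for Frail retrieval: '(the ?y (book ?y))' -> an instance symbol."""
    frame_name = determiner.split("(")[2].split()[0]   # crude parse of the example form
    matches = knowledge_base.get(frame_name, [])
    if determiner.startswith("(the") and len(matches) != 1:
        raise LookupError(f"'the' requires exactly one known {frame_name}")
    return matches[0] if matches else f"new-{frame_name}"

instance = evaluate_frame_determiner("(the ?y (book ?y))")
print(instance)   # e.g. 'book37' -- usable wherever a frame statement can appear
```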
We do, however, retain a strong typing upon our semantic objects; that is, each syntactic category has an associated semantic type. Table 2 shows the types of components of Frail, how they may be combined, and examples of each; the nature of the components listed will become clearer with the examples in the next section. Table 3 gives the component of Frail that corresponds to each syntactic type. As a consequence of the kind of semantic objects we are dealing with, the system of types is not recursively defined in the Montague style, but we retain the idea that the type of a semantic object should be a function of the types of the components of that object. We have also carried over from Montague semantics the operation of syntactic and semantic rules in tandem upon corresponding objects. However, it is not possible to maintain the one-to-one correspondence of rules when we replace Montague's simple syntax with the much larger English grammar of the Paragram parser. This is because in Montague's system each syntactic rule either creates a new node from old ones (for example, forming an intransitive verb phrase from a transitive verb and a noun phrase) or places a new node under an existing one (such as adding an adverb to an existing intransitive verb phrase). These are actions that clearly have semantic counterparts. When we start to add movement rules such as passivization and dative movement to the grammar, we find ourselves with rules that have no clear semantic counterpart; indeed with rules that, it is often claimed (e.g. Chomsky 1965:132), leave the meaning of a sentence quite unchanged. 11Although the object that represents a sentence is a procedure call in Frail upon a knowledge base, this is not procedural semantics in the strict Woods sense, as the meaning inheres not in the procedures but in the objects they manipulate. We therefore distinguish between parser rules that should have corresponding semantic rules and those that should not. As the above discussion suggests, rules that attach nodes are the ones that have semantic counterparts. In Paragram, these are the base structure rules. For this subset of the syntactic rules, semantic rules run in tandem, just as in Montague's semantics.12 It is a consequence of the above properties of our semantic interpreter that we have also retained the property of compositionality by design. This follows from the uniform typing; the correspondence between syntactic and semantic rules that maintains this uniformity; and there being a unique semantic object corresponding to each word of English13 (see Dowty et al 1981:180-181). Unlike those of Woods's (1967) airline reservation system front-end discussed in Section 2, our semantic rules are very weak: they cannot change or ignore the components upon which they operate, nor can more than one rule volunteer an interpretation for any node of the parse tree. The power of the system comes from the nature of the semantic objects and the syntax-directed application of semantic rules, rather than from the semantic rules themselves.
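To make the typed objects and their canonical combinations concrete, here is a small illustrative sketch in Python. Absity and Frail were Lisp programs, so the class names, constructors, and printing conventions below are our own assumptions; only the shapes of the objects (frames, slot-filler pairs, frame determiners, frame statements) and the restriction that a rule may only attach components, never alter or discard them, are taken from the description above.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class SlotFiller:
    """A slot-filler pair such as (color=red): the type assigned to adjectives
    and to preposition + noun-phrase combinations."""
    slot: str
    filler: object  # a frame instance name or a FrameStatement

    def __str__(self):
        return f"({self.slot}={self.filler})"


@dataclass
class Frame:
    """A frame such as (book ?y): the type assigned to nouns."""
    name: str
    var: str
    slots: List[SlotFiller] = field(default_factory=list)

    def add(self, sf):
        # The only operation a semantic rule may perform on a frame is to
        # attach another slot-filler pair; nothing is ever changed or dropped.
        return Frame(self.name, self.var, self.slots + [sf])

    def __str__(self):
        parts = [self.name, self.var] + [str(s) for s in self.slots]
        return "(" + " ".join(parts) + ")"


@dataclass
class FrameDeterminer:
    """A frame determiner such as (the ?x) or (a ?x): the type assigned to
    determiners (and, for whole sentences, to the final punctuation)."""
    det: str
    var: str

    def apply(self, frame):
        # Canonical combination: the frame becomes the argument of the
        # determiner, giving a frame statement such as (the ?x (book ?x)).
        return FrameStatement(self.det, self.var, frame)


@dataclass
class FrameStatement:
    """A frame statement; evaluating it in Frail would return an instance."""
    det: str
    var: str
    frame: Frame

    def __str__(self):
        return f"({self.det} {self.var} {self.frame})"


# "the red book"
np = FrameDeterminer("the", "?x").apply(
    Frame("book", "?x").add(SlotFiller("color", "red")))
print(np)  # -> (the ?x (book ?x (color=red)))
```

Running the fragment prints (the ?x (book ?x (color=red))), which matches the noun-phrase example worked through in the examples section below.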
As we mentioned earlier, any parser will occasionally need semantic help. In Marcus-type parsers, this need occurs in rules that have the form "If semantics prefers 14Note ~hat it is the responsibility" of the frame system to determine with the help of the pragmatics module which one of the books that it m~ty know about is the correct one in context. One problem that Montague semantics does not address is that of word disambiguation. Rather, there is assumed to exist a function that maps each word to a unique sense, and the semantic formalism operates on the values of this function.Is Clearly, however, a practical NLU system must take account of word sense ambiguity, and so we must add a disambiguation facility to our interpreter. Fortunately, the word translation function allows us to ~dd this facility transparently.Instead of simply mapping a word to an invariant unique sense, the function can map it to whatever sense is correct for a particular instance.Our disambiguation facility is called Polaroid Words. Is Each word in the system is represented by 16polaroid is a trademark of the Polaroid Corporation. SUBJ Nadia (agent,= (the ?x (thlng ?x (propername="Nadla"))))OSJ the book (patlenl;=(the ?y (book ?y))) in the mall (loca~lon:C1;he ?~ (mall ?w))) a store in the mall (a ?z (s~core ?z (loca~ion=C~he ?w (mall ?w))))) from a store in the mall (source=Ca ?z (s~ore ?z (locatlon=(the ?w (mall ?W))))))NaSa bought the book from a storein the mall (buy ?u (agent=(the ?x (thlng ?x (propername="Sadia")))) (patient=(the ?y (book ?y))) (source=(a ?z (store ?z (location=(the ?w (m~ll ?w)))))))Nadia bought the book from a store in the mail. (a ?u (buy ?u (agenr,=(the ?x (thing ?x (propername=" N adla" ) ) ) ) (patient= (the ?y (book ?y))) (source=(a ?z (store ?z (locatlon=(1;he ?w (marl ?w))))))) a separate process that, by talking to other processes and by looking at paths made by spreading activation in the knowledge base, figures out the word's meaning. Each word is like a self-developing photograph that can be manipulated by the semantic interpreter even while the picture is forming; and if some other process needs to look at the picture (e.g. if the Semantic Enquiry Desk has an "if semantics prefers ~ question from the parser), then a half-developed picture may provide enough information. Exactly the same process, without the spreading-activation phase, is used to disambiguate case roles as well. Polaroid Words are described more fully in Hirst and Charniak 1982 and Hirst 1983. 7 . Comparison with other workOur approach to semantic interpretation may usefully be compared with other recent work with similar goals to ours. One such project is that of Jones and Warren (1982) , who attempt a conciliation between Montague semantics and a conceptual dependency representation (Schank 1975) . Their approach is to modify Montague's translation from English to intensional logic so that the resulting expressions have a canonical interpretation in conceptual dependency.They do not address such issues as extending Montague's syntax, nor whether their approach can be extended to deal with more modern Schankian representations (e.g. Schank 1982 ). Nevertheless, their work, which they describe as a hesitant first step, is similar in spirit to ours, and it will be interesting to see how it develops.Important recent work that extends the syntactic complexity of Montague's work is that on generalized phrase structure grammar (GPSG) (Gazdar 1982) . 
Such grammars combine a complex transformationfree syntax with Montague's semantics, the rules again operating in tandem. Gawron et al (1982) have implemented a database interface based on GFSG. In their system, the intensional logic of the semantic component is replaced by a simplified extensional logic, which, in turn, is translated into a query for database access. Schubert and Peiletier (1982) have also sought to simplify the semantic output of a GPSG to a more ~conventional" logical form; and Rosenschein and Shieber (1982) describe a similar translation process into extensional logical forms, using a context-free grammar intended to be similar to a GPSG. Iv The GPSG approaches differ from ours in that their output is a logical form rather than an immediate representation of a semantic object; that is, the output is not tied to any representation of knowledge. In Gawron et al's system, the database provides an interpretation of the logical form, but only in a weak sense, as the form must first pass through another (apparently somewhat ad hoc) translation and disambiguati0n process. Nor do these approaches provide any semantic feedback to the parset. is These differences, however, are independent of the choice of GPSG; it should be easy, at least in principle, to modify these approaches to give Frail output, or, conversely, to replace Paragram in our system with a GPSG parser. 19The PSX-KLON~-system of Webber (1980a, 1980b) also has a close coupling between syntax and semantics. Rather than operating in tandem, though, the two are described as "cascaded', with an ATN parser handing constituents to a semantic interpreter, which is allowed to return them (causing the ATN to back up) if the purser's choice is found to be semantically untenable. Otherwise, a process of incremental description refinement is used to interpret the constituent; this relies on the fact that the syntactic constituents are represented in the same formalism, KL-OSZ (Brachman 1978) , as the system's knowledge base. The semantic interpreter uses projection rules to form an interpretation in a language called JAaGON, which is then translated into KL-ONZ. Bobrow and Webber are particularly concerned with using this framework to determine the combinatoric relationship between quantifiers in a sentence.Bobrow and Webber's approach addresses several of the issues that we do, in particular the relationship between syntax and semantics. The information feedback to the parser is similar to our Semantic Enquiry Desk, though in our system, because the parser is deterministic, semantic feedback cannot be con fluted with syntactic success or failure. Both approaches rely on the fact that the objects manipulated are objects of a knowledge representation that permits appropriate judgments to be made, though in rather a different manner. Hendler and Phillips (1981; Phillips and Hendler 1982) have implemented a control structure for NLU based on message passing, with the goal of running syntax and semantics in parallel and providing semantic feedback to the parser. A ~moderator" translates between syntactic constructs and semantic representations. 
However, their approach to interpretation is essentially ad hoc (James Hendler, persoaoi cummunication), and they do not attempt to put syntactic and semantic rules in strict correspondence, nor type their semantic objects.None of the work mentioned above addresses issues of lexical ambiguity as ours does, though Bobrow and Webber's incremental description refinement could possibly be extended to cover it. Also, Gawron et al have a process to disambiguate case roles in the logical form after it is complete, which operates in a manner not dissimilar to the case-slot part of Polaroid Words.8Despite this problem,Friedman et ¢I (1978bFriedman et ¢I ( , 1978c have implemented Mont~gue semantics computationally by using techn/ques for maintaining partially specified models. However, their system is intended ~s ~ tool for understanding Montague semantics better, r~ther than &s ~ usable NLU system (1978b:26).Rosenschein and Shieber's semaxltic translation fonow~ parsing rather than running in parallel with it, but it iv strongly syntax-dLrected, and is, it seems, isomorphic to ~n in-t~ndem translation that provides no feedback to the p~rser.
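Returning to the Polaroid Words mechanism described above, the following toy Python sketch is only a caricature of the idea of a word as a gradually self-developing process. The real mechanism negotiates over marker-passing paths in the Frail knowledge base; here the example word, its sense inventory, and the way evidence arrives are all invented for illustration.

```python
class PolaroidWord:
    """A word whose set of plausible senses is narrowed incrementally by
    other processes, and which can report a tentative answer at any time."""

    def __init__(self, word, senses):
        self.word = word
        self.senses = set(senses)

    def eliminate(self, sense):
        # Some other process (a neighbouring Polaroid Word, or evidence from
        # paths found by spreading activation) rules a sense out.
        self.senses.discard(sense)

    def developed(self):
        return len(self.senses) == 1

    def preferred(self):
        # Even a half-developed "picture" can answer an enquiry from the
        # parser: return whichever candidates survive so far.
        return sorted(self.senses)


# Hypothetical example: 'bank' in "Nadia kept her money in the bank".
bank = PolaroidWord("bank", {"river-bank", "financial-institution", "snow-bank"})
print(bank.preferred())            # three candidates; picture not yet developed

bank.eliminate("snow-bank")        # evidence arrives from the rest of the sentence
bank.eliminate("river-bank")
print(bank.developed(), bank.preferred())   # True ['financial-institution']
```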
Some examples will make our semantic interpreter clearer. First, let's consider a simple noun phrase, the book. From Table 3, the semantic type for the determiner the is a frame determiner function, in this case (the ?x), and the type for the noun book is a kind of frame, here (book ?x). These are combined in the canonical way: the frame name is added as an argument to the frame determiner function, and the result, (the ?x (book ?x)), is a Frail frame statement (which evaluates to an instance) that represents the unique book referred to.14 A descriptive adjective corresponds to a slot-filler pair; for example, red is represented by (color=red), where color is the name of a slot and red is a frame instance, the name of a frame. A slot-filler pair can be added as an argument to a frame, so the red book would have the semantic interpretation (the ?x (book ?x (color=red))). Now let's consider a complete sentence: Nadia bought the book from a store in the mall. For simplicity, we assume that each word is unambiguous (we discuss our disambiguation procedures in Section 6); we also ignore the tense on the verb. Table 5 shows the next four stages in the interpretation. First, noun phrases and their prepositions are combined, forming slot-filler pairs. Then the prepositional phrase in the mall can be attached to a store (since a noun phrase, being a frame, can have a slot-filler pair added to it), and the prepositional phrase from a store in the mall is formed. The third stage shown in the table is the attachment of the slot-filler pairs for the three top-level prepositional phrases to the frame representing the verb. Finally, the period, which is translated as a frame determiner function, causes instantiation of the buy frame, and the translation is complete. 12In her synthesis of transformational syntax with Montague semantics, Partee (1973, 1975) observes that the semantic rule corresponding to many transformations will simply be the identity mapping. 13We show in Section 6 how this may be reconciled with lexical ambiguity.
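The stages just described can be traced with a short script. This is purely illustrative: the frame, slot, and variable names follow the fragments quoted from Table 5, and everything is built as plain strings simply to show the order in which the pieces combine; the real Absity builds Frail objects, not text.

```python
# Stage 1: determiners combine with nouns to give frame statements.
the_book = "(the ?y (book ?y))"
the_mall = "(the ?w (mall ?w))"

# Stage 2: a preposition turns a frame statement into a slot-filler pair,
# which can then be attached to another frame ("a store in the mall").
in_the_mall     = f"(location={the_mall})"
a_store_in_mall = f"(a ?z (store ?z {in_the_mall}))"
from_a_store    = f"(source={a_store_in_mall})"

# Stage 3: the subject, the object, and the prepositional phrase become
# slot-filler pairs attached to the verb frame.
nadia   = '(agent=(the ?x (thing ?x (propername="Nadia"))))'
patient = f"(patient={the_book})"
buy     = f"(buy ?u {nadia} {patient} {from_a_store})"

# Stage 4: the sentence-final period acts as a frame determiner and
# instantiates the buy frame.
sentence = f"(a ?u {buy})"
print(sentence)
```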
null
We have described a new approach to semantic interpretation, one suggested by the semantic formalism of Richard Montague. We believe this work to be a clean and elegant foundation for semantic interpretation, in contrast to previous ad hoc approaches. At the moment, though, the work is only a foundation; the test of a foundation is what can be constructed on top of it. We do not expect the construction to be unproblematic; here are some of the problems we will have to solve. First, the approach is not just compositional but almost too compositional. At present, noun phrases are taken to be invariably and unalterably specific and extensional, that is, to imply the existence of the unique entity or set of entities that they specify. In English, this is not always correct. A sentence such as: Nadia owns a unicorn. implies that a unicorn exists, but this is not true of: Nadia talked about a unicorn. which also has a non-specific reading. Montague's solution to this problem does not seem easily adaptable to Absity.20 Similarly, a sentence such as: The lion is not a beast to be trifled with. can be a generic statement intended to be true of all lions; Montague did not treat generics. Second, the approach is heavily dependent upon the expressive power of the underlying frame language. For example, our language, Frail, is as yet deficient in its handling of time, and this is clearly reflected in Absity. Further, the approach makes certain claims about the nature of frame representations (that a descriptive adjective is in some sense a slot-filler pair, for example) that might be shown to be untenable. We will also have to deal with problems in quantification, anaphoric reference, and many other areas. Nevertheless, we believe that this approach to semantic interpretation shows considerable promise.
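To illustrate the first of these problems, consider what the present scheme would do with the two unicorn sentences. The sketch below is hypothetical: the frame names own and talk and the slot name topic are our inventions; the point is only that the indefinite noun phrase receives the same existence-implying translation in both cases.

```python
# The indefinite NP "a unicorn" always becomes the same frame-determiner
# expression, so both translations would, when evaluated in Frail, create a
# unicorn instance; that is right for "owns" but wrong for the non-specific
# reading of "talked about".
a_unicorn = "(a ?x (unicorn ?x))"
nadia = '(agent=(the ?n (thing ?n (propername="Nadia"))))'

owns   = f"(a ?e (own ?e {nadia} (patient={a_unicorn})))"
talked = f"(a ?e (talk ?e {nadia} (topic={a_unicorn})))"

print(owns)
print(talked)
```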
Main paper: montague semantics: In his well-known "PTQ" paper (Montague 1973) , Richard Montague presented the complete syntax and semantics for a small fragment of English. Although it was limited in vocabulary and syntactic complexity, Montague's fragment dealt with such imporlit is also philosophically controversial. For discussion, see Fodor 1978 , Johnson-Laird 1978 , Fodor 1979 , and Wilks 1982 is not the only current work with this Ko~tl; in Section 7 we discuse other similarly motivated work, tant semantic problems as opaque contexts, different types of predication with the word be, and the "the temperature is 90" problem; 3 for details of these, see Dowty, Wall and Peters (1981) .Montague's semantic rules correspond to what we have been calling semantic interpretation. That is, in conjunction with a syntactic process, they produce a semantic representation, or translation, of a sentence.There are four important properties of Montague semantics that we will examine here.Below, we will carry three of these properties over into our own semantics.The first property, the one that we will later drop, is that for Montague, semantic objects, the results of the semantic translation, were such things as individual concepts (which are functions to individuals from the cartesian product of points in time and possible worlds), properties of individual concepts, and functions of functions of functions of functions. At the top level, the meaning, of a sentence was a truth condition relative to a possible world and point in time. These semantic objects were represented by expressions of intensional logic; that is, instead of translating English directly into these objects, a sentence was first translated to an expression of intensional logic, for which, in turn, there existed an interpretation in terms of these semantic objects.Second, Montague had a strong theory of types for his semantic objects: a set of types that corresponded to types of syntactic constituents. Thus, given a particular syntactic category, such as proper noun or adverb, Montague was able to say that the meaning of a constituent of that category was a semantic object of such and such a type. 4 Montague's system of types was recursively defined, with entities, truth values and intensions as primitives, and other types defined as functions from one type to another in such a manner that if syntactic category X was formed by adding category Y to category Z, then the type corresponding to g would be functions from senses of the type of 3That is, to ensure that "The temperature is ~0 and the temperature is rising* cannot lead to the inference that "90 is rising". Y to the type of X. The first alternative is that the meaning of the whole is a function of not just the parts but also the situation in which the sentence is uttered. For example, the possessive in English is highly dependent upon pragmatics; the phrase Nadia's penguin could refer, in different circumstances, to the penguin that Nadia owns, to the one that she is carrying but doesn't actually own, or to the one that she just bet on at the penguin races. Our definition above of semantic interpretation excluded this sort of consideration, but this should not be regarded as uncontroversial.The second alternative to compositional semantics is that the meaning of the whole is not a systematic function of the parts in any reasonable sense of the word. 
This is exemplified by the interpretation of the word depart in Woods's original system, which varied greatly depending on the preposition it dominated (Woods 1967 :A-43-A-46). For example, the interpretation of the sentence:AA-57 departs from Boston. That is, the semantic object into which depart is translated is the procedure depart. (AA-57 is an airline Right.) However, the addition of a prepositional phrase changes this; Table 1 shows the interpretation of the same sentence after wrious prepositional phrases have been appended. For example, the addition of ~o Chicago changes the translation of depart;to connect, though the intended sense of the word is clearly unchanged, s This is necessitated by the particular set of database primitives that Woods used, selected for their being %tom/c" (1967:7-4-7-11) rather than for promoting compositions/Sty. Rules in the system axe able to generate non-compositional representations because they have the power to set an arbitrarily complex parse tree as their trigger, and to return an axbitrary representation that could modify or completely ignore the components of the parse trees they are supposed to be interpreting/ For example, a rule can say (1967:A-44):If you have a sentence whose subject is a flight, whose verb is leave or depart, and which has two (or more) prepositional phrases modifying the verb, one with /from and a place name, the other with a~ and a time, then the interpretation is equal (dtime (a, b), c), where a is the flight, b is the place, and c is the time.Thus while Woods's semantics could probably be made • reasonably compositional simply by appropriate adjustment of the procedure calls into which sentences are translated, it would still not be compositional by design the way Montague semantics is.8~Ve have simplified a Little here in order to make our point. In fact, sentences like those in Table I with prepositional phrases will ~ctually cause the execution of two semantic rules: one for the complete sentence, and one for the sentence it happens to contain, A.A-57 depcrts from 8os~o~. The resulting interpretation will be the conjunction of the output from each rule (Woods 1967~9-5): AA-57 depLrts from Boston to Chicago.Woods leaves it open (1967:9-7) a,s to how the semantic redundancy in such expressions should be handled, thou~,h one of hie suggestions is a filter that would remove conjuncts implied by others, giving, in this case, the interpretation shown in Table 1. 7Nor is there &nything that prevents the construction of rules that would result in conjunctions with conflicting, rather than merely redund~tnt, terms. AA-57 departs from Boston at 8:00am. equal (dtlme (aa-5T. boston), 8:00am) AA-57 departs from Boston after 8:00am. greater (dtime (aa-5T, boston), 8:00am) A.A-57 departs from Boston before 8:00am. greater (8:00am, dtlme (aa-5T. boston)) Although Montague semantics has much to recommend it, it is not possible, ho~vever, to implement it directly in a practical NLU system, for two reasons. The first is that Montague semantics as currently formulated is computationally impractical. It throws around huge sets, infinite objects, functions of functions, and piles of possible worlds with great abandon. Friedman, Moran and Warren (1978a) point out that in the smallest possible Montague system, one with. 
two entities and two points of reference, there are, for example, 22"s= elements in the class of possible denotations of prepositions, each element being a set containing 2512 ordered pairs, sThe second reason we can't use Montague semantics directly is that truth-conditional semantics are not useful in AI; A/uses know/edge semant.ics (Tarnawksy 1982) in which semantic objects tend to be symbols or expressions in a declarative or procedural knowledge representation system. Moreover, truth-conditional semantics really only deals with declarative sentences (Dowry eC al 1981:13) (though there has been work attempting to extend Montague's work to questions; e.g. Hamblin 1973 ); a practical NLU system needs to be able to deal with commands and questions as well as declarative sentences.There have, however, been attempts to take the intensional logic that Montague uses as an intermediate step in his translations, and give it a new interpretation in terms of AI-type semantic objects, thus preserving all other aspects of Montague's approach; see, for example, Hobbs and Rosenschein 1977, and Smith's (1979) objections to their approach. There has also been interest in using the intensional logic itself (or something similar) as an AI representation ~ (e.g. Moore 1981 ). But while it may be possible to make limited use of intensional logic expressions, I° there are many problems that need to be solved before intensional logic or other flavors of logical forms could support the type of inference and problem solving that AI requires of its semantic representations; see Moore 1981 for a useful discussion. Moreover, Gallin (1975) has shown Montague's intensional logic to be incomplete. (See also the discussion in Section 7 of work using logical forms.)Nevertheless, it is possible to use many aspects of Montague's approach in semantics in AI. The semantic interpreter that we describe below maintains three of the four properties of Montague semantics that we described above, and we therefore refer to it as "Montague-inspired". Our semantic interpreter is a component of a system that uses a frame-like representation for both story comprehension and problem-solving. The system includes a frame language, named Frail, a problem solver, and a discourse pragmatics component; further details may be found in , Wong 1981a , and Wong 1981b . The natural language front-end includes Paragram, a deterministic parser based on that of Marcus (1980) . Unlike Marcus's parser, Paragram has two types of rule: base phrase structure rules and transformational rules. It is also able to parse ungrammatical sentences; it always uses the rule that matches best, even if none match exactly. Paragram is described in Charniak 1983. 91tonically, Montague regarded intensional logic merely as a convenience in specifyin K his translation, and one that was completely irrelevant to the substance of his semantic theories. lOGodden (1981) in f~ct uses them for simple translation between Thai and English. aThe queJtion-m~rk prefix indicates & variable. Whenever a free v~iable in a frame is bound to a v~iable in a frame determiner, a unique new name is generated for that variable and its bindings. In this paper, we shall assume for simplicity that vaxiable names ~re maKically ~correct" from the start.bDo not be misled by the fact that frames and frame determiners look similar. 
They He actually very different: the first is a gtatic data structure; the second is a frame retrieva~l procedure.CAn instance is the result of evaluating a frame statement in Frail. It is a symbol that denotes the object referenced by the frame statement. To Absity, there is no distinction between the two; ~n instan.ce can be used wherever ~ frame Itatement c~n.The semantic interpreter is named Absity (for reasons too obscure to burden the reader with). As we mentioned above, it retains three of the four properties of Montague semantics that we discussed. The property that we have dropped is, of course, truth conditionality and Montague's associated treasury of semantic objects. We have replaced them with AIstyle semantics, and our own repertory of objects, which are components of the frame language Frail. 11We do, however, retain a strong typing upon our semantic objects, that is, each syntactic category has an associated semantic type. Table 2 shows the types of components of Frail, how they may be combined, and examples of each; the nature of the components listed will become clearer with the examples in the next section. Table 3 gives the component of Frail that corresponds to each syntactic type. As a consequence of the kind of semantic objects we are dealing with, the system of types is not recursively defined in the Montague style, but we retain the idea that the type of a semantic object should be a function of the types of the components of that object.We have also carried over from Montague semantics the operation of syntactic and semantic rules in tandem upon corresponding objects. However, it is not possible to maintain the one-to-one correspondence of rules when we replace Montague's simple syntax with the much larger English grammar of the Paragram parser. This is because in Montague's system each syntactic rule either creates a new node from old ones-for example, forming an intransitive verb phrase from a transitive verb and a noun phrase--or places a new llAlthou~h the object that represents a Sentence is • procedure call in Frail upon a knowledge basej this is not procedur~l sem~ntics in the strict Woods sense, as the mes~aing inheres not in the procedures but in the objects they manipulate.node under an existing one--such as adding an adverb to an existing intransitive verb phrase. These are" actions that clearly have semantic counterparts. When we start to add movement rules such as passivizatioa and dative movement to the grammar, we find ourselves with rules that have no clear semantic counterpart; indeed with rules that, it is often claimed (e.g. Chomsky 1965:132) , leave the meaning of a sentence quite unchanged.We therefore distinguish between parser rules that should have corresponding semantic rules and those that should not. As the above discussion suggests, rules that attach nodes are the ones that have semantic counterparts. In Paragram, these are the base structure rules. For this subset of the syntactic rules, semantic rules run in tandem, just as in Montague's semantics, m It is a consequence of the above properties of our semantic interpreter that we have also retained the property of compositionaiity by design. This follows from the uniform typing; the correspondence between syntactic and semantic rules that maintains this uniformity; and there being a unique semantic object corresponding to each word of English i~ (see Dowty e~ al 1981:180-181) . 
Unlike those of Woods's (1967) airline reservation system front-end discussed in Section 2, our semantic rules are very weak: they cannot change or ignore the components upon which they operate, nor can more than one rule volunteer an interpretation for any node of the parse tree. The power of the system comes from the nature of the semantic objects and the syntax-directed application of semantic rules, rather than from the semantic rules themselves. examples: Some examples will make our semantic interpreter clearer. First, let's consider a simple noun phrase, the book. From Table 3 , the semantic type for the determiner She is a frame determiner function, in this case (the ?x), and the type for the noun book is a kind of frame, here (book ?x). These are combined 12In her synthesis of transformationa.l syntax with Monta6,ue acrostics, Partee (1973, 1975) observes that the semantic rule corresponding to many transformations will simply be the identity mapping.13We show in Section 6 how this may be reconciled with lexical ambiguity.in the canonical way--the frame name is added as an argument to the frame determiner function--and the result, (the ?x (book ?x)), is a Frail frame statement (which evaluates to an instance) that represents the unique book referred to. 14 A descriptive adjective corresponds to a slot-filler pair; for example, red is represented by (color=red), where color is the name of a slot and red is a frame instance, the name of a frame. A slot-filler pair can be added as an argument to a frame, so the red book would have the semantic interpretation (the ?x (book ?x (color=red))).Now let's consider a complete sentence:Nadia bought the book from a store in the mall. each word is unambiguous (we discuss our disambiguation procedures in Section 6); we also ignore the tense cn the verb. Table 5 shows the next four stages in the interpretation. First, noun phrases and their prepositions are combined, forming slot-filler pairs. Then the prepositional phrase in the mall can be attached to a store (since a noun phrase, being a frame, can have a slot-filler pair added to it), and the prepositional phrase from a store in the marl is formed. The third stage shown in the Table is the attachment of the slotfiller pairs for the three top-level prepositional phrases to the frame representing the verb. Finally, the period, which is translated as a frame determiner function, causes instantiation of the buy frame, and the translation is complete. semantic help for the parser: As we mentioned earlier, any parser will occasionally need semantic help. In Marcus-type parsers, this need occurs in rules that have the form "If semantics prefers 14Note ~hat it is the responsibility" of the frame system to determine with the help of the pragmatics module which one of the books that it m~ty know about is the correct one in context. word sense disambiguation: One problem that Montague semantics does not address is that of word disambiguation. Rather, there is assumed to exist a function that maps each word to a unique sense, and the semantic formalism operates on the values of this function.Is Clearly, however, a practical NLU system must take account of word sense ambiguity, and so we must add a disambiguation facility to our interpreter. 
Fortunately, the word translation function allows us to ~dd this facility transparently.Instead of simply mapping a word to an invariant unique sense, the function can map it to whatever sense is correct for a particular instance.Our disambiguation facility is called Polaroid Words. Is Each word in the system is represented by 16polaroid is a trademark of the Polaroid Corporation. SUBJ Nadia (agent,= (the ?x (thlng ?x (propername="Nadla"))))OSJ the book (patlenl;=(the ?y (book ?y))) in the mall (loca~lon:C1;he ?~ (mall ?w))) a store in the mall (a ?z (s~core ?z (loca~ion=C~he ?w (mall ?w))))) from a store in the mall (source=Ca ?z (s~ore ?z (locatlon=(the ?w (mall ?W))))))NaSa bought the book from a storein the mall (buy ?u (agent=(the ?x (thlng ?x (propername="Sadia")))) (patient=(the ?y (book ?y))) (source=(a ?z (store ?z (location=(the ?w (m~ll ?w)))))))Nadia bought the book from a store in the mail. (a ?u (buy ?u (agenr,=(the ?x (thing ?x (propername=" N adla" ) ) ) ) (patient= (the ?y (book ?y))) (source=(a ?z (store ?z (locatlon=(1;he ?w (marl ?w))))))) a separate process that, by talking to other processes and by looking at paths made by spreading activation in the knowledge base, figures out the word's meaning. Each word is like a self-developing photograph that can be manipulated by the semantic interpreter even while the picture is forming; and if some other process needs to look at the picture (e.g. if the Semantic Enquiry Desk has an "if semantics prefers ~ question from the parser), then a half-developed picture may provide enough information. Exactly the same process, without the spreading-activation phase, is used to disambiguate case roles as well. Polaroid Words are described more fully in Hirst and Charniak 1982 and Hirst 1983. 7 . Comparison with other workOur approach to semantic interpretation may usefully be compared with other recent work with similar goals to ours. One such project is that of Jones and Warren (1982) , who attempt a conciliation between Montague semantics and a conceptual dependency representation (Schank 1975) . Their approach is to modify Montague's translation from English to intensional logic so that the resulting expressions have a canonical interpretation in conceptual dependency.They do not address such issues as extending Montague's syntax, nor whether their approach can be extended to deal with more modern Schankian representations (e.g. Schank 1982 ). Nevertheless, their work, which they describe as a hesitant first step, is similar in spirit to ours, and it will be interesting to see how it develops.Important recent work that extends the syntactic complexity of Montague's work is that on generalized phrase structure grammar (GPSG) (Gazdar 1982) . Such grammars combine a complex transformationfree syntax with Montague's semantics, the rules again operating in tandem. Gawron et al (1982) have implemented a database interface based on GFSG. In their system, the intensional logic of the semantic component is replaced by a simplified extensional logic, which, in turn, is translated into a query for database access. Schubert and Peiletier (1982) have also sought to simplify the semantic output of a GPSG to a more ~conventional" logical form; and Rosenschein and Shieber (1982) describe a similar translation process into extensional logical forms, using a context-free grammar intended to be similar to a GPSG. 
Iv The GPSG approaches differ from ours in that their output is a logical form rather than an immediate representation of a semantic object; that is, the output is not tied to any representation of knowledge. In Gawron et al's system, the database provides an interpretation of the logical form, but only in a weak sense, as the form must first pass through another (apparently somewhat ad hoc) translation and disambiguati0n process. Nor do these approaches provide any semantic feedback to the parset. is These differences, however, are independent of the choice of GPSG; it should be easy, at least in principle, to modify these approaches to give Frail output, or, conversely, to replace Paragram in our system with a GPSG parser. 19The PSX-KLON~-system of Webber (1980a, 1980b) also has a close coupling between syntax and semantics. Rather than operating in tandem, though, the two are described as "cascaded', with an ATN parser handing constituents to a semantic interpreter, which is allowed to return them (causing the ATN to back up) if the purser's choice is found to be semantically untenable. Otherwise, a process of incremental description refinement is used to interpret the constituent; this relies on the fact that the syntactic constituents are represented in the same formalism, KL-OSZ (Brachman 1978) , as the system's knowledge base. The semantic interpreter uses projection rules to form an interpretation in a language called JAaGON, which is then translated into KL-ONZ. Bobrow and Webber are particularly concerned with using this framework to determine the combinatoric relationship between quantifiers in a sentence.Bobrow and Webber's approach addresses several of the issues that we do, in particular the relationship between syntax and semantics. The information feedback to the parser is similar to our Semantic Enquiry Desk, though in our system, because the parser is deterministic, semantic feedback cannot be con fluted with syntactic success or failure. Both approaches rely on the fact that the objects manipulated are objects of a knowledge representation that permits appropriate judgments to be made, though in rather a different manner. Hendler and Phillips (1981; Phillips and Hendler 1982) have implemented a control structure for NLU based on message passing, with the goal of running syntax and semantics in parallel and providing semantic feedback to the parser. A ~moderator" translates between syntactic constructs and semantic representations. However, their approach to interpretation is essentially ad hoc (James Hendler, persoaoi cummunication), and they do not attempt to put syntactic and semantic rules in strict correspondence, nor type their semantic objects.None of the work mentioned above addresses issues of lexical ambiguity as ours does, though Bobrow and Webber's incremental description refinement could possibly be extended to cover it. Also, Gawron et al have a process to disambiguate case roles in the logical form after it is complete, which operates in a manner not dissimilar to the case-slot part of Polaroid Words. conclusion: We have described a new approach to semantic interpretation, one suggested by the semantic formalism of Richard Montague. We believe this work to be a clean and elegant foundation for semantic interpretation, in contrast to previous ad hoc approaches. At the moment, though, the work is only a foundation; the test of a foundation is what can be constructed on top of it. 
We do not expect the construction to be unproblematic; here are some of the problems we will have to solve.First, the approach is not just compositional but almost too compositional. At present, noun phrases are taken to be invariably and unalterably specific and extensional, that is to imply the existence of the unique entity or set of entities that they specify. In English, this is not always correct. A sentence such as:Nadia owns a unicorn.implies that a unicorn exists, but this is not true of:Nadia talked abou~ a unicorn.which also has a non-specific reading. Montague's solution to this problem does not seem easily adaptable to Absity. 2° Similarly, a sentence such as:The lion is not a beast to be trifled w/th. can be a generic statement intended to be true of all lions; Montague did not treat generics.Second, the approach is heavily dependent upon the expressive power of the underlying frame language. For example, our language, Frail, is yet deficient in its handling of time, and this is clearly reflected in Absity. Further, the approach makes certain claims about the nature of frame representations~that a descriptive adjective in some sense is a slot-filler pair, for example that might be shown to be untenable.We will also have to deal with problems in quantification, anaphoric reference, and many other areas. Nevertheless, we believe that this approach to semantic interpretation shows considerable promise. i. introduction: By semantic interpretation we mean the process of mapping from a syntactically analyzed sentence of natural language to a representation of its meaning. We exclude from semantic interpretation any consideration of discourse pragmatics; rather, discourse pragmatics operate upon the output of the semantic interpreter. We also exclude syntactic analysis; the integration of syntactic and semantic analysis becomes very messy when complex syntactic constructions are considered, and, moreover, it is our observation that those who argue for the integration of the two are usually arguing for subordinating the role of syntax, a position we reject. This is not to say that parsing can get by without semantic help; indirect object finding, and prepositional phrase and relative clause attachment, for example, often require semantic knowledge.Below we will show that syntax and semantics may work well together while remaining distinct modules.Research on semantic interpretation in artificial intelligence goes back to Woods's dissertation (1967 Woods's dissertation ( , 1968 , which introduced procedural semantics in a natural-language front-end for an airline reservation system. Woods's system had rules with patterns that, when they matched part of the parsed input sentence, contributed a string to the semantic representation of the sentence. This string was usually constructed from the terminals of the matched parse tree fragment. The strings were combined to form a procedure call that, when evaluated, entered or retrieved the appropriate database information. This approach is still the predominant one today, and even though it has been refined over the years, semantic interpretation remains perhaps the least understood and most ad hoc area of natural language understanding (NLU).I However, recent advances in linguistics, most notably Montague semantics (Montague 1973; Dowry, Wall and Peters 1981) , suggest ways of putting NLU semantic interpretation on a cleaner and firmer foundation than it now is. In this paper, we describe such a foundation. 
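As an illustration of the kind of pattern-action rule just described, here is a rough Python rendering of the flight/depart rule quoted earlier from Woods (1967:A-44). The dictionary encoding of the parse and the helper function are our own simplification; Woods's system matched patterns against real parse trees and assembled strings into procedure calls.

```python
def interpret_departure(parse):
    """If the subject is a flight and the verb is leave/depart, with a
    'from <place>' and an 'at <time>' prepositional phrase, return the
    interpretation equal(dtime(flight, place), time)."""
    if parse["verb"] not in ("leave", "depart"):
        return None
    pps = dict(parse.get("pps", []))  # e.g. [("from", "boston"), ("at", "8:00am")]
    if parse["subject_type"] == "flight" and "from" in pps and "at" in pps:
        return f'equal(dtime({parse["subject"]}, {pps["from"]}), {pps["at"]})'
    return None


print(interpret_departure({
    "subject": "aa-57", "subject_type": "flight", "verb": "depart",
    "pps": [("from", "boston"), ("at", "8:00am")],
}))  # -> equal(dtime(aa-57, boston), 8:00am)
```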
2 Appendix: 8Despite this problem, Friedman et al (1978b, 1978c) have implemented Montague semantics computationally by using techniques for maintaining partially specified models. However, their system is intended as a tool for understanding Montague semantics better, rather than as a usable NLU system (1978b:26). Rosenschein and Shieber's semantic translation follows parsing rather than running in parallel with it, but it is strongly syntax-directed, and is, it seems, isomorphic to an in-tandem translation that provides no feedback to the parser.
null
null
null
null
{ "paperhash": [ "jones|conceptual_dependency_and_montague_grammar:_a_step_toward_conciliation", "hirst|word_sense_and_case_slot_disambiguation", "phillips|a_message-passing_control_structure_for_text_understanding", "rosenschein|translating_english_into_logical_form", "gawron|processing_english_with_a_generalized_phrase_structure_grammar", "wong|language_comprehension_in_a_problem_solver", "moore|problems_in_logical_form", "bobrow|knowledge_representation_for_syntactic/semantic_processing", "schank|reminding_and_memory_organization:_an_introduction_to_mops.", "marcus|a_theory_of_syntactic_recognition_for_natural_language", "brachman|a_structural_paradigm_for_representing_knowledge.", "gallin|intensional_and_higher-order_modal_logic,_with_applications_to_montague_semantics", "woods|procedural_semantics_for_a_question-answering_machine", "schubert|from_english_to_logic:_context-free_computation_of_‘conventional’_logical_translation", "lehnert|strategies_for_natural_language_processing", "wong|on_the_unification_of_language_comprehension_with_problem_solving" ], "title": [ "Conceptual Dependency and Montague Grammar: A Step Toward Conciliation", "Word Sense and Case Slot Disambiguation", "A Message-Passing Control Structure for Text Understanding", "Translating English Into Logical Form", "Processing English With a Generalized Phrase Structure Grammar", "Language Comprehension in a Problem Solver", "Problems in Logical Form", "Knowledge Representation for Syntactic/Semantic Processing", "Reminding and Memory Organization: An Introduction to MOPs.", "A theory of syntactic recognition for natural language", "A Structural Paradigm for Representing Knowledge.", "Intensional and Higher-Order Modal Logic, With Applications to Montague Semantics", "Procedural semantics for a question-answering machine", "From English to Logic: Context-Free Computation of ‘Conventional’ Logical Translation", "Strategies for Natural Language Processing", "On the unification of language comprehension with problem solving" ], "abstract": [ "In attempting to establish a common basis from which the approaches and results can be compared, we have taken a conciliatory attitude toward natural language research in the conceptual dependency (CD) paradigm and Montague Grammar (MG) formalism. Although these two approaches may seem to be strange bedfellows indeed with often noticeably different perspectives, we have observed many commonalities. We begin with a brief description of the problem view and ontology of each and then create a formulation of CD as logic. We then give \"conceptual\" MG translations for the words in an example sentence which we use in approximating a word-based parsing style. Finally, we make some suggestions regarding further extensions of logic to introduce higher level representations.", "The tasks of disambiguating words and determining case are similar and can usefully be combined. We present two cooperating mechanisms that each work on both tasks: MARKER PASSING finds connections between concepts in a system of frames, and POLAROID WORDS provide a protocol for negotiation between ambiguous words and cases. Examples of each in action are given. The cooperating mechanisms allow linguistic and world knowledge to be unified, frequently eliminate the need to use inference in disambiguation, and provide a usefully constrained model of disambiguation.", "This paper describes an object-oriented, message-passing system for natural language text understanding. 
The application domain is the texts of Texas Instruments' patent descriptions. The object-oriented environment permits syntactic analysis modules to communicate with domain knowledge modules to resolve ambiguities as they arise.", "A scheme for syntax-directed translation that mirrors compositional model-theoretic semantics is discussed. The scheme is the basis for an English translation system called PATR and was used to specify a semantically interesting fragment of English, including such constructs as tense, aspect, modals, and various lexically controlled verb complement structures. PATR was embedded in a question-answering system that replied appropriately to questions requiring the computation of logical entailments.", "This paper describes a natural language processing system implemented at Hewlett-Packard's Computer Research Center. The system's main components are: a Generalized Phrase Structure Grammar (GPSG); a top-down parser; a logic transducer that outputs a first-order logical representation; and a \"disambiguator\" that uses sortal information to convert \"normal-form\" first-order logical expressions into the query language for HIRE, a relational database hosted in the SPHERE system. We argue that theoretical developments in GPSG syntax and in Montague semantics have specific advantages to bring to this domain of computational linguistics. The syntax and semantics of the system are totally domain-independent, and thus, in principle, highly portable. We discuss the prospects for extending domain-independence to the lexical semantics as well, and thus to the logical semantic representations.", "This paper describes BRUIN, a unified A system that can perform both problem-solving and language comprehension tasks. Included in the system is a frame-based knowledge-representation language called FRAIL, a problem solving component called NASL (which is based on McDermott's problem-solving language of the same name), and a context-recognition component currently known as PRAGMATICS. \n \nThe intent of this paper is to give a flavor of how the context recognizer PRAGMATICS works and what it can do. Examples are drawn from the inventory-control, restaurant and blocks-world domains.", "Abstract : Most current theories of natural-language processing propose that the assimilation of an utterance involves producing an expression or structure that in some sense represents the literal meaning of the utterance. It is often maintained that understanding what an utterance literally means consists in being able to recover such a representation. In philosophy and linguistics this sort of representation is usually said to display the \"logical form\" of an utterance. This paper surveys some of the key problems that arise in defining a system of representation for the logical forms of English sentences and suggests possible approaches to their solution. The author first looks at some general issues relating to the notion of logical form, explaining why it makes sense to define such a notion only for sentences in context, not in isolation, and then discusses the relationship between research on logical form and work on knowledge representation in artificial intelligence. The rest of the paper is devoted to examining specific problems in logical form. 
These include the following: quantifiers; events, actions and processes; time and space; collective entities and substances; propositional attitudes and modalities; and questions and imperatives.", "This paper describes the RUS framework for natural language processing, in which a parser incorporating a substantial ATN grammar for English interacts with a semantic interpreter to simultaneously parse and interpret input. The structure of that interaction is discussed, including the roles played by syntactic and semantic knowledge. Several implementations of the RUS framework are currently in use, sharing the same grammar, but differing in the form of their semantic component. One of these, the PSI-KLONE system, is based on a general object-centered knowledge representation system, called KL-ONE. The operation of PSI-KLONE is described, including its use of KL-ONE to support a general inference process called \"incremental description refinement.\" The last section of the paper discusses several important criteria for knowledge representation systems to be used in syntactic and semantic processing.", "Abstract : The organization of human memory is a central problem in Cognitive Science. Recent experimental work by Bower, Black, and Turner (1979), concerning some of the memory structures proposed in Schank and Abelson (1977), uncovered certain recognition confusions not easily explained in terms of those structures. This paper presents some solutions to those problems, by hypothesizing a more fragmented, multi-level set of memory structures (MOPs), which are more flexible than scripts. These structures are also expected to account for other phenomena, particularly that of 'being reminded' of some previous situation by a current situation. (Author)", "Abstract : Assume that the syntax of natural language can be parsed by a left-to-right deterministic mechanism without facilities for parallelism or backup. It will be shown that this 'determinism' hypothesis, explored within the context of the grammar of English, leads to a simple mechanism, a grammar interpreter. (Author)", "Abstract : This report presents on associative network formalism for representing conceptual knowledge. While many similar formalisms have been developed since the introduction of the semantic network in 1966, they have often suffered from inconsistent interpretation of their links, lack of appropriate structure in their nodes, and general expressive inadequacy. In this paper, we take a detailed look at the history of these semantic nets and begin to understand their inadequacies by examining closely what their representational pieces have been intended to model. Based on this analysis, a new type of network is presented - the Structured Inheritance Network (SI-NET) - designed to circumvent common expressive shortcomings.", "intensional and higher order modal logic with applications intensional and higher order modal logic with applications intensional and higher order modal logic with applications intensional and higher order modal logic with applications 19 applications of modal logic in linguistics first-order intensional logic lehman college to appear in: encylopedia of language and linguistics, ed a note on higher order grammar arxiv saab 9 3 convertible repair manual 1999 ebook | browserfame what kind of intensional logic do we really want/need? 
montague's intensional logic as comonadic type theory a new english translation of the septuagint ebook boolean and modal algebras university of edinburgh mysql pocket reference sql statements functions and making sense of statisticsa conceptual overview ebook parenting for preventionhow to raise a child to say no to databases and higher types springer books and journals received link.springer formal semantics and pragmatics for natural language querying worlds: modal logic vs. two-sorted type theory hp 15c instruction manual whrose the unremembered part 2 ghosts from the past volume 2 ghani kashmirilife poems introduction to sufi poets series first-order classical eric pacuit modal logic intensional models for the theory of types arxiv:math china s urban space jbacs contemporary linguistics. from logic to language visiting pagan ethics paganism as a world religion ebook in re richard h ricuk hp officejet 7210 all in one manual whrose of men plants pelmax in nite-game semantics for logic programming languages severe injuries to the limbsstaged treatment ebook materials matter toward a sustainable materials policy a beautiful child dogolf intensional models for the theory of types citeseerx theories of development concepts and applications 6th letter to my daughter maya angelou chezer certain personal matters outlawgaming long march the choctaws gift to irish famine relief border collie border collie flitby nclex review 4000study software for nclex rn individual", "Simmons has presented a survey of some fifteen experimental question-answering and related systems which have been constructed since 1959. These systems take input questions in natural English (subject to varying constraints) and attempt to answer the questions on the basis of a body of information, called the data base, which is stored inside the computer. This process can be conceptually divided into three phases---syntatic analysis, semantic analysis, and retrieval, as illustrated schematically in Figure 1. The first phase consists of parsing the input sentence into a structure which explicitly represents the grammatical relationships among the words of the sentence. Using this information the second component constructs a representation of the semantic content or \"meaning\" of the sentence. The remaining phase consists of procedures for either retrieving the answer directly from the data base, or else deducing the answer from information contained in the data base. The dotted lines in the figure represent the possible use of feedback from the later stages to aid in parsing and semantic interpretation.", "We describe an approach to parsing and logical translation that was inspired by Gazdar's work on context-free grammar for English. Each grammar rule consists of a syntactic part that specifies an acceptable fragment of a parse tree, and a semantic part that specifies how the logical formulas corresponding to the constituents of the fragment are to be combined to yield the formula for the fragment. However, we have sought to reformulate Gazdar's semantic rules so as to obtain more or less 'conventional' logical translations of English sentences, avoiding the interpretation of NPs as property sets and the use of intensional functors other than certain propositional operators. The reformulated semantic rules often turn out to be slightly simpler than Gazdar's. 
Moreover, by using a semantically ambiguous logical syntax for the preliminary translations, we can account for quantifier and coordinator scope ambiguities in syntactically unambiguous sentences without recourse to multiple semantic rules, and are able to separate the disambiguation process from the operation of the parser-translator. We have implemented simple recursive descent and left-corner parsers to demonstrate the practicality of our approach.", "Introducing a new hobby for other people may inspire them to join with you. Reading, as one of mutual hobby, is considered as the very easy hobby to do. But, many people are not interested in this hobby. Why? Boring is the reason of why. However, this feel actually can deal with the book and time of you reading. Yeah, one that we will refer to break the boredom in reading is choosing strategies for natural language processing as the reading material.", "Over the years, it has been argued that there ought to be an artificial intelligence (AI) system which can do both problem solving and language comprehension using the same database of knowledge. Such a system has not previously been constructed because researchers in these two areas have generally used rather different knowledge representations: the predicate calculus for problem solving and some frame-like representation for language comprehension. \nThis dissertation describes BRUIN, a unified AI system that can perform both problem-solving and language comprehension tasks. Included in the system is a frame-based knowledge-representation language called FRAIL, a problem solving component called NASL (which is based on McDermott's problem-solving language of the same name), and a context-recognition component known as PRAGMATICS. Examples that have been tested in this system are drawn from the inventory-control, restaurant and blocks-world domains. \nThe main intent of this dissertation is to describe how context recognition can be done in a problem solving environment. Also discussed is the knowledge representation language FRAIL and the relevant portions of the problem solver, NASL. Finally, there is a discussion of the problems with the context recognizer, PRAGMATICS and possibilities for future research." ], "authors": [ { "name": [ "M. Jones", "D. Warren" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Graeme Hirst", "Eugene Charniak" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "B. Phillips", "J. Hendler" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "S. Rosenschein", "Stuart M. Shieber" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "J. Gawron", "Jonathan J. King", "J. Lamping", "E. Loebner", "E. Anne Paulson", "G. Pullum", "Ivan Sag", "T. 
Wasow" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "D. Wong" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Robert C. Moore" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "R. Bobrow", "B. Webber" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "R. Schank" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Mitchell P. Marcus" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "R. Brachman" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "D. Gallin" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "W. Woods" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Lenhart K. Schubert", "F. J. Pelletier" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "W. Lehnert", "Martin H. Ringle" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "D. Wong" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ], "s2_corpus_id": [ "39794097", "1213247", "1841979", "9564084", "14372141", "16055170", "18655604", "3003106", "60747992", "6616065", "58814991", "118696122", "14297121", "17712124", "60545533", "59778774" ], "intents": [ [ "background" ], [ "background" ], [], [], [], [], [ "background" ], [], [], [ "background" ], [ "methodology" ], [ "background" ], [], [], [], [] ], "isInfluential": [ false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false ] }
Problem: The paper addresses the ad hoc nature of semantic interpretation in natural language understanding (NLU) systems, particularly in the translation from parse trees to semantic representations. Solution: The paper proposes using a Montague-inspired approach to semantics in an integrated NLU and problem-solving system, aiming to provide a cleaner and more robust foundation for semantic interpretation by replacing Montague's semantic objects with elements of the frame language Frail and incorporating word sense and case slot disambiguation systems.
500
0.03
null
null
null
null
null
null
null
null
a831417056d33ad6cd412ffafbeadfef1ce5e8a7
691094
null
Discourse Pragmatics and Ellipsis Resolution in Task-Oriented Natural Language Interfaces
This paper reviews discourse phenomena that occur frequently in task-oriented man-machine dialogs, reporting on an empirical study that demonstrates the necessity of handling ellipsis, anaphora, extragrammaticality, inter-sentential metalanguage, and other abbreviatory devices in order to achieve convivial user interaction. Invariably, users prefer to generate terse or fragmentary utterances instead of longer, more complete "standalone" expressions, even when given clear instructions to the contrary. The XCALIBUR expert system interface is designed to meet these needs, including generalized ellipsis resolution by means of a rule-based caseframe method superior to previous semantic grammar approaches.
{ "name": [ "Carbonell, Jaime G." ], "affiliation": [ null ] }
null
null
21st Annual Meeting of the Association for Computational Linguistics
1983-06-01
23
61
null
Discourse Phenomena: Natural language discourse exhibits several intriguing phenomena that defy definitive linguistic analysis and general computational solutions. However, some progress has been made in developing tractable computational solutions to simplified versions of phenomena such as ellipsis and anaphora resolution [20, 10, 21]. This paper reviews discourse phenomena that arise in task-oriented dialogs with responsive agents (such as expert systems, rather than purely passive data base query systems), outlines the results of an empirical study, and presents our method for handling generalized ellipsis resolution in the XCALIBUR expert system interface. With the exception of inter-sentential metalanguage, and to a lesser degree extragrammaticality, the significance of the phenomena listed below has long been recognized and documented in the computational linguistics literature. • Anaphora -- Interactive task-oriented dialogs invite the use of anaphora, much more so than simpler data base query situations. • Definite noun phrases -- As Grosz [6] • Meta-linguistic utterances -- Intra-sentential metalanguage has been investigated to some degree [18, 12], but its more common inter-sentential counterpart has received little attention [4]. However, utterances about other utterances (e.g., corrections of previous commands, such as "I meant to type X instead" or "I should have said ...") are not infrequent in our dialogs, and we are making an initial stab at this problem [8]. Note that it is a cognitively less demanding task for a user to correct a previous utterance than to repeat an explicit sequence of commands (or worse yet, to detect and undo explicitly each and every unwanted consequence of a mistaken command). • Indirect speech acts -- Occasionally users will resort to indirect speech acts [19, 16, 1], especially in connection with inter-sentential metalanguage or by stating a desired state of affairs and expecting the system to supply the sequence of actions necessary to achieve that state. In our prior work we have focused on extragrammaticality and inter-sentential metalanguage. In this paper we report on an empirical study of discourse phenomena in a simulated interface and on our work on generalized ellipsis resolution in the context of the XCALIBUR project.
null
This section outlines the XCALIBUR project, whose objective is to provide flexible natural language access (comprehension and generation) to the XSEL expert system [15]. XSEL, the Digital Equipment Corporation's automated salesman's assistant, advises on selection of appropriate VAX components and produces a sales order for automatic configuration by the R1 system [14]. Part of the XSEL task is to provide the user with information about DEC components, hence subsuming the database query task. However, unlike a pure data base query system, an expert system interface must also interpret commands, understand assertions of new information, and carry out task-oriented dialogs (such as those discussed by Grosz [6]). XCALIBUR, in particular, deals with commands to modify an order, as well as information requests pertaining to its present task or its data base of VAX component parts. In the near future it should process clarificational dialogs when the underlying expert system (i.e. XSEL) requires additional information or advice, as illustrated in the sample dialog below: >What is the largest 11/780 fixed disk under $40,000? The rp07-aa is a 516 MB fixed pack disk that costs $38,000. >The largest under $50,000? The rp07-aa. >Add two rp07-aa disks to my order. (Indicative as these empirical studies are of where one must focus one's efforts in developing convivial interfaces, they were not performed with adequate control groups or statistical rigor. Therefore, there is ample room to confirm, refute or expand upon the details of our empirical findings. However, the surprisingly strong form in which Grice's maxim [5] manifests itself in task-oriented human-computer dialogs seems qualitatively irrefutable.) ... resolution and focused natural language generation. Figure 3.1 provides a simplified view of the major modules of XCALIBUR, and the reader is referred to [3] for further elaboration. When XSEL is ready to accept input, the information handler is passed a message indicating the case frame or class of case frames expected as a response. For our example, assume that a command or query is expected, the parser is notified, and the user enters >What is the price of the 2 largest dual port fixed media disks? The parser returns: [QUERY (OBJECT (SELECT (disk ... (a hypothetical sketch of one plausible shape for this structure follows this section). Rather than delving into the details of the representation or the manner in which it is transformed prior to generating an internal command to XSEL, consider some of the functions of the information handler: • Defaults must be instantiated. In the example, the query does not explicitly name an INFO-SOURCE, which could be the component database, the current set of line-items, or a set of disks brought into focus by the preceding dialog. • Ambiguous fillers or attribute names must be resolved. For example, in most contexts, "300 MB disk" means a disk with "greater than or equal to 300 MB" rather than strictly "equal to 300 MB". A "large" disk refers to ample memory capacity in the context of a functional component specification, but to large physical dimensions during site planning. Presently, a small amount of local pragmatic knowledge suffices for the analysis, but, in the general case, closer integration with XSEL may be required. • Generalized ellipsis resolution, as presented below, occurs within the information handler. As the reader may note, the present raison d'etre of the information manager is to act as a repository of task and dialog knowledge, providing information that the user did not feel necessary to convey explicitly. Additionally, the information handler routes the parsed command or query to the appropriate knowledge source, be it an external static data base, an expert system, or a dynamically constructed data structure (such as the current VAX order). Our plans call for incorporating a model of the user's task and knowledge state that should provide useful information to both parser and generator. At first, we intend to focus on stereotypical users such as a salesperson, a system engineer and a customer, who would have rather different domain knowledge, perhaps different vocabulary, and certainly different sets of tasks in mind. Eventually, refinements and updates to a default user model should be inferred from an analysis of the current dialog [17].
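For concreteness, here is one plausible shape for the parsed case frame of the example query, written as a Python dict. This is a hypothetical sketch only: the parser output quoted above is truncated in this copy, and the real XCALIBUR representation is richer. The field names PROJECT, SELECT, OPERATION and INFO-SOURCE are taken from the surrounding text; the nesting and the value encodings are assumptions.

# Hypothetical sketch of a parsed case frame for
# "What is the price of the 2 largest dual port fixed media disks?"
example_query_frame = {
    "type": "QUERY",
    "PROJECT": "price",                    # attribute to be reported
    "OPERATION": {"order": "descending",   # "largest" => rank by size, descending
                  "by": "size",
                  "count": 2},             # "the 2 largest"
    "SELECT": {                            # description of the objects wanted
        "head": "disk",
        "ports": 2,                        # "dual port"
        "media": "fixed",                  # "fixed media"
    },
    "INFO-SOURCE": None,                   # left unfilled by the user
}

On this reading, the information handler's job in the example is to replace the empty INFO-SOURCE slot with a default drawn from the dialog context, as described in the first bullet above.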
The necessity to handle most of the discourse phenomena listed in the preceding section was underscored by an empirical study we conducted to ascertain the most pressing needs of natural language interfaces in interactive applications. The initial objective of this study was to circumscribe the natural language interface task by attempting to instruct users of a simulated interface not to employ different discourse devices or difficult linguistic constructs. In essence, we wanted to determine whether untrained users would be able to interact as instructed (for instance avoiding all anaphoric referents), and, if so, whether they would still find the interface convivial given our artificial constraints. The basic experimental set-up consisted of two remotely located terminals linked to each other and a transaction log file that kept a record of all interactions. The user was situated at one terminal and was told he or she was communicating with a real natural language interface to an operating system (and an accompanying intelligent help system, not unlike Wilensky's Unix Consultant [23]). The experimenter at the other terminal simulated the interface and gave appropriate commands to the (real) operating system. In different sessions, users were instructed not to use pronouns, to type only complete sentences, to avoid complex syntax, to type only direct commands or queries (e.g., no indirect speech acts or discourse-level metalinguistic utterances [4, 8]), and to stick to the topic. The only instructions that were reliably followed were sticking to the topic (always) and avoiding complex syntax (usually). All other instructions were repeatedly violated in spite of constant negative feedback -- that is, the person pretending to be the natural language program replied with a standard error message. I recorded some verbal responses as well (with users telling a secretary at the terminal what she should type), and, contrary to my expectations, these did not qualitatively differ from the typed utterances. The significant result here is that users appear incapable or unwilling to generate lengthy commands, queries or statements when they can employ a linguistic device to state the same proposition in a more terse manner. To restate the principle more succinctly: ... Given these results, we concluded that it was more appropriate to focus our investigations on handling abbreviatory discourse devices, rather than to address the issue of expanding our syntactic coverage to handle verbose complex structures seldom observed in our experience. In this manner, the objectives of the XCALIBUR project differ from those of most current investigations.

The XCALIBUR system handles ellipsis at the case-frame level. Its coverage appears to be a superset of the LIFER/LADDER system [10, 11] and the PLANES ellipsis module [21]. Although it handles most of the ellipsed utterances we encountered, it is not meant to be a general linguistic solution to the ellipsis phenomenon. The following examples are illustrative of the kind of sentence fragments the current case-frame method handles. For brevity, assume that each sentence fragment occurs immediately following the initial query below. INITIAL QUERY: "What is the price of the three largest single port fixed media disks?" "Speed?" "Two smallest?" "How about the price of the two smallest" "also the smallest with dual ports" "Speed with two ports?" "Disk with two ports." In the representative examples above, punctuation is of no help, and pure syntax is of very limited utility. For instance, the last three phrases are syntactically similar (indeed, the last two are indistinguishable), but each requires that a different substitution be made on the preceding query. All three substitute the number of ports in the original SELECT field, but the first substitutes "ascending" for "descending" in the OPERATION field, the second substitutes "speed" for "price" in the PROJECT field, and the third merely repeats the case header of the SELECT field. Ellipsis is resolved differently in the presence or absence of strong discourse expectations. In the former case, the discourse expectation rules are tested first, and, if they fail to resolve the sentence fragment, the contextual substitution rules are tried. If there are no strong discourse expectations, the contextual substitution rules are invoked directly. The system generated a query for confirmation or ... The utterance "try 300 or faster" is syntactically a complete sentence, but semantically it is just as fragmentary as the previous utterances. The strong discourse expectations, however, suggest that it be processed in the same manner as syntactically incomplete utterances, since it satisfies the expectations of the interactive task. The terseness principle operates at all levels: syntactic, semantic and pragmatic. The contextual substitution rules exploit the semantic representation of queries and commands discussed in the previous section. The scope of these rules, however, is limited to the last user interaction of appropriate type in the dialog focus, as illustrated in the following example: This rule resolves the following kind of ellipsis: >What is the size of the 3 largest single port fixed media disks? >disks with two ports? Note that it is impossible to resolve this kind of ellipsis in a general manner if the previous query is stored verbatim or as a semantic-grammar parse tree. "Disks with two ports" would at best correspond to some <disk-descriptor> non-terminal, and hence, according to the LIFER algorithm [10, 11], would replace the entire phrase "single port fixed media disks" that corresponded to <disk-descriptor> in the parse of the original query. However, an informal poll of potential users suggests that the preferred interpretation of the ellipsis retains the MEDIA specifier of the original query. The ellipsis resolution process, therefore, requires a finer grain substitution method than simply inserting the highest level non-terminals in the ellipsed input in place of the matching non-terminals in the parse tree of the previous utterance. Taking advantage of the fact that a case frame analysis of a sentence or object description captures the meaningful semantic relations among its constituents in a canonical manner, a partially instantiated nominal case frame can be merged with the previous case frame as follows (a schematic sketch of this merge follows this section): • Substitute any cases instantiated in the original query that the ellipsis specifically overrides. For instance, "with two ports" overrides "single port" in our example, as both entail different values of the same case descriptor, regardless of their different syntactic roles. ("Single port" in the original query is an adjectival construction, whereas "with two ports" is a post-nominal modifier in the ellipsed fragment.) • Retain any cases in the original parse that are not explicitly contradicted by new information in the ellipsed fragment. For instance, "fixed media" is retained as part of the disk description, as are all the sentential-level cases in the original query, such as the quantity specifier and the projection attribute of the query ("size"). • Add cases of a case frame in the query that are not instantiated therein, but are specified in the ellipsed fragment. For instance, the "fixed head" descriptor is added as the media case of the disk nominal case frame in resolving the ellipsed fragment in the following example: >Which disks are configurable on a VAX 11/780? >Any configurable fixed head disks? • In the event that a new case frame is mentioned in the ellipsed fragment, wholesale substitution occurs, much like in the semantic grammar approach. For instance, if after the last example one were to ask "How about tape drives?", the substitution would replace "fixed head disks" with "tape drives", rather than replacing only "disks" and producing the phrase "fixed head tape drives", which is meaningless in the current domain. In these instances the semantic relations captured in a case frame representation and not in a semantic grammar parse tree prove immaterial. The key to case-frame ellipsis resolution is matching corresponding cases, rather than surface strings, syntactic structures, or non-canonical representations. It is true that instantiating a sentential or nominal case frame correctly in the parsing process requires semantic knowledge, some of which can be rather domain specific. But, once the parse is attained, the resulting canonical representation, encoding appropriate semantic relations, can and should be exploited to provide the system with additional functionality such as the present ellipsis resolution method. The major problem with semantic grammars is that they convolve syntax with semantics in a manner that requires multiple representations for the same semantic entity. For instance, the ordering of marked cases in the input does not reflect any difference in meaning (although one could argue that surface ordering may reflect differential emphasis and other pragmatic considerations). A pure semantic grammar must employ different rules to recognize each and every admissible case sequence. Hence, the resultant parse trees differ, and the knowledge that surface positioning of unmarked cases is meaningful, but positioning of marked ones is not, must be contained within the ellipsis resolution process, a very unnatural repository for such basic information. Moreover, in order to attain a measure of the functionality described above for case-frames, ellipsis resolution in semantic grammar parse trees must somehow merge adjectival and post-nominal forms (corresponding to different non-terminals and different relative positions in the parse trees) so that ellipsed structures such as "a disk with 1 port" can replace the "dual-port" part of the phrase "...dual-port fixed-media disk" in an earlier utterance. One way to achieve this effect is to collect together specific non-terminals that can substitute for each other in certain contexts, in essence grouping non-canonical representations into semantic equivalence classes. However, this process would require hand-crafting large associative tables or similar data structures, a high price to pay for each domain-specific semantic grammar. Hence, in order to achieve robust ellipsis resolution, all proverbial roads lead to recursive case constructions encoding domain semantics and canonical structure for multiple surface manifestations. Finally, consider one more rule that provides additional context in situations where the ellipsis is of a purely semantic nature, such as: ... The RP07-aa, the RP07-ab .... We need to answer the question "largest what?" before proceeding. One can call this problem a special case of definite noun phrase resolution, rather than semantic ellipsis, but terminology is immaterial. Such phrases occur with regularity in our corpus of examples and must be resolved by a fairly general process. The following rule answers the question from context, regardless of the syntactic completeness of the new utterance. If: A command or query caseframe lacks one or more required case fillers (such as a missing SELECT field), and the last case frame in focus has an instantiated case that meets all the semantic tests for the case missing the filler, ... XCALIBUR presently has eight contextual substitution rules, similar to the ones above, and we have found several additional ones to extend the coverage of ellipsed queries and commands (see [3] for a more extensive discussion). It is significant to note that a small set of fairly general rules exploiting the case frame structures covers most instances of commonly occurring ellipsis, including all the examples presented earlier in this section.
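A minimal sketch of the four merge behaviours just listed, assuming case frames are represented as plain Python dicts with a nested "object" frame for the nominal description. The names, the flat structure and the functions are illustrative assumptions, not a reconstruction of XCALIBUR's actual recursive case-frame machinery.

def merge_nominal(prev_obj: dict, frag_obj: dict) -> dict:
    """Merge an elliptical object description into the previous one."""
    if frag_obj.get("head") and frag_obj["head"] != prev_obj.get("head"):
        return dict(frag_obj)             # wholesale substitution for a new head noun
    merged = dict(prev_obj)               # retain cases not contradicted by the fragment
    merged.update({c: f for c, f in frag_obj.items() if f is not None})
    return merged                         # overridden or newly added cases win

def resolve_ellipsis(prev_query: dict, fragment: dict) -> dict:
    """Merge an elliptical fragment into the last query in the dialog focus."""
    resolved = dict(prev_query)           # keep sentential cases (PROJECT, quantity, ...)
    for case, filler in fragment.items():
        if case == "object":
            resolved["object"] = merge_nominal(prev_query.get("object", {}), filler)
        elif filler is not None:
            resolved[case] = filler       # e.g. "Speed?" overrides the PROJECT case
    return resolved

# Usage, echoing the examples above.
prev = {"type": "QUERY", "PROJECT": "size", "quantity": 3, "order": "descending",
        "object": {"head": "disk", "ports": 1, "media": "fixed"}}
# ">disks with two ports?" : override the ports case, retain "fixed media" and the rest
print(resolve_ellipsis(prev, {"object": {"head": "disk", "ports": 2}}))
# ">Speed?" : override only the projected attribute
print(resolve_ellipsis(prev, {"PROJECT": "speed"}))
# ">How about tape drives?" : wholesale substitution of the nominal frame
print(resolve_ellipsis(prev, {"object": {"head": "tape drive"}}))

The design point this is meant to illustrate is that the merge keys on case names rather than on surface strings or parse-tree positions, which is what lets "fixed media" survive the substitution of the ports case.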
null
Main paper: a summary of task-oriented: Discourse Phenomena: Natural language discourse exhibits several intriguing phenomena that defy definitive linguistic analysis and general computational solutions. However, some progress has been made in developing tractable computational solutions to simplified versions of phenomena such as ellipsis and anaphora resolution [20, 10, 21]. This paper reviews discourse phenomena that arise in task-oriented dialogs with responsive agents (such as expert systems, rather than purely passive data base query systems), outlines the results of an empirical study, and presents our method for handling generalized ellipsis resolution in the XCALIBUR expert system interface. With the exception of inter-sentential metalanguage, and to a lesser degree extragrammaticality, the significance of the phenomena listed below has long been recognized and documented in the computational linguistics literature. • Anaphora -- Interactive task-oriented dialogs invite the use of anaphora, much more so than simpler data base query situations. • Definite noun phrases -- As Grosz [6] • Meta-linguistic utterances -- Intra-sentential metalanguage has been investigated to some degree [18, 12], but its more common inter-sentential counterpart has received little attention [4]. However, utterances about other utterances (e.g., corrections of previous commands, such as "I meant to type X instead" or "I should have said ...") are not infrequent in our dialogs, and we are making an initial stab at this problem [8]. Note that it is a cognitively less demanding task for a user to correct a previous utterance than to repeat an explicit sequence of commands (or worse yet, to detect and undo explicitly each and every unwanted consequence of a mistaken command). • Indirect speech acts -- Occasionally users will resort to indirect speech acts [19, 16, 1], especially in connection with inter-sentential metalanguage or by stating a desired state of affairs and expecting the system to supply the sequence of actions necessary to achieve that state. In our prior work we have focused on extragrammaticality and inter-sentential metalanguage. In this paper we report on an empirical study of discourse phenomena in a simulated interface and on our work on generalized ellipsis resolution in the context of the XCALIBUR project.

an empirical study: The necessity to handle most of the discourse phenomena listed in the preceding section was underscored by an empirical study we conducted to ascertain the most pressing needs of natural language interfaces in interactive applications. The initial objective of this study was to circumscribe the natural language interface task by attempting to instruct users of a simulated interface not to employ different discourse devices or difficult linguistic constructs. In essence, we wanted to determine whether untrained users would be able to interact as instructed (for instance avoiding all anaphoric referents), and, if so, whether they would still find the interface convivial given our artificial constraints. The basic experimental set-up consisted of two remotely located terminals linked to each other and a transaction log file that kept a record of all interactions. The user was situated at one terminal and was told he or she was communicating with a real natural language interface to an operating system (and an accompanying intelligent help system, not unlike Wilensky's Unix Consultant [23]). The experimenter at the other terminal simulated the interface and gave appropriate commands to the (real) operating system. In different sessions, users were instructed not to use pronouns, to type only complete sentences, to avoid complex syntax, to type only direct commands or queries (e.g., no indirect speech acts or discourse-level metalinguistic utterances [4, 8]), and to stick to the topic. The only instructions that were reliably followed were sticking to the topic (always) and avoiding complex syntax (usually). All other instructions were repeatedly violated in spite of constant negative feedback -- that is, the person pretending to be the natural language program replied with a standard error message. I recorded some verbal responses as well (with users telling a secretary at the terminal what she should type), and, contrary to my expectations, these did not qualitatively differ from the typed utterances. The significant result here is that users appear incapable or unwilling to generate lengthy commands, queries or statements when they can employ a linguistic device to state the same proposition in a more terse manner. To restate the principle more succinctly: ... Given these results, we concluded that it was more appropriate to focus our investigations on handling abbreviatory discourse devices, rather than to address the issue of expanding our syntactic coverage to handle verbose complex structures seldom observed in our experience. In this manner, the objectives of the XCALIBUR project differ from those of most current investigations.

a sketch of the XCALIBUR interface: This section outlines the XCALIBUR project, whose objective is to provide flexible natural language access (comprehension and generation) to the XSEL expert system [15]. XSEL, the Digital Equipment Corporation's automated salesman's assistant, advises on selection of appropriate VAX components and produces a sales order for automatic configuration by the R1 system [14]. Part of the XSEL task is to provide the user with information about DEC components, hence subsuming the database query task. However, unlike a pure data base query system, an expert system interface must also interpret commands, understand assertions of new information, and carry out task-oriented dialogs (such as those discussed by Grosz [6]). XCALIBUR, in particular, deals with commands to modify an order, as well as information requests pertaining to its present task or its data base of VAX component parts. In the near future it should process clarificational dialogs when the underlying expert system (i.e. XSEL) requires additional information or advice, as illustrated in the sample dialog below: >What is the largest 11/780 fixed disk under $40,000? The rp07-aa is a 516 MB fixed pack disk that costs $38,000. >The largest under $50,000? The rp07-aa. >Add two rp07-aa disks to my order. (Indicative as these empirical studies are of where one must focus one's efforts in developing convivial interfaces, they were not performed with adequate control groups or statistical rigor. Therefore, there is ample room to confirm, refute or expand upon the details of our empirical findings. However, the surprisingly strong form in which Grice's maxim [5] manifests itself in task-oriented human-computer dialogs seems qualitatively irrefutable.) ... resolution and focused natural language generation. Figure 3.1 provides a simplified view of the major modules of XCALIBUR, and the reader is referred to [3] for further elaboration. When XSEL is ready to accept input, the information handler is passed a message indicating the case frame or class of case frames expected as a response. For our example, assume that a command or query is expected, the parser is notified, and the user enters >What is the price of the 2 largest dual port fixed media disks? The parser returns: [QUERY (OBJECT (SELECT (disk ... Rather than delving into the details of the representation or the manner in which it is transformed prior to generating an internal command to XSEL, consider some of the functions of the information handler: • Defaults must be instantiated. In the example, the query does not explicitly name an INFO-SOURCE, which could be the component database, the current set of line-items, or a set of disks brought into focus by the preceding dialog. • Ambiguous fillers or attribute names must be resolved. For example, in most contexts, "300 MB disk" means a disk with "greater than or equal to 300 MB" rather than strictly "equal to 300 MB". A "large" disk refers to ample memory capacity in the context of a functional component specification, but to large physical dimensions during site planning. Presently, a small amount of local pragmatic knowledge suffices for the analysis, but, in the general case, closer integration with XSEL may be required. • Generalized ellipsis resolution, as presented below, occurs within the information handler. As the reader may note, the present raison d'etre of the information manager is to act as a repository of task and dialog knowledge, providing information that the user did not feel necessary to convey explicitly. Additionally, the information handler routes the parsed command or query to the appropriate knowledge source, be it an external static data base, an expert system, or a dynamically constructed data structure (such as the current VAX order). Our plans call for incorporating a model of the user's task and knowledge state that should provide useful information to both parser and generator. At first, we intend to focus on stereotypical users such as a salesperson, a system engineer and a customer, who would have rather different domain knowledge, perhaps different vocabulary, and certainly different sets of tasks in mind. Eventually, refinements and updates to a default user model should be inferred from an analysis of the current dialog [17].

generalized caseframe ellipsis: The XCALIBUR system handles ellipsis at the case-frame level. Its coverage appears to be a superset of the LIFER/LADDER system [10, 11] and the PLANES ellipsis module [21]. Although it handles most of the ellipsed utterances we encountered, it is not meant to be a general linguistic solution to the ellipsis phenomenon. The following examples are illustrative of the kind of sentence fragments the current case-frame method handles. For brevity, assume that each sentence fragment occurs immediately following the initial query below. INITIAL QUERY: "What is the price of the three largest single port fixed media disks?" "Speed?" "Two smallest?" "How about the price of the two smallest" "also the smallest with dual ports" "Speed with two ports?" "Disk with two ports." In the representative examples above, punctuation is of no help, and pure syntax is of very limited utility. For instance, the last three phrases are syntactically similar (indeed, the last two are indistinguishable), but each requires that a different substitution be made on the preceding query. All three substitute the number of ports in the original SELECT field, but the first substitutes "ascending" for "descending" in the OPERATION field, the second substitutes "speed" for "price" in the PROJECT field, and the third merely repeats the case header of the SELECT field. Ellipsis is resolved differently in the presence or absence of strong discourse expectations. In the former case, the discourse expectation rules are tested first, and, if they fail to resolve the sentence fragment, the contextual substitution rules are tried. If there are no strong discourse expectations, the contextual substitution rules are invoked directly (a minimal sketch of this control flow follows this section). The system generated a query for confirmation or ... The utterance "try 300 or faster" is syntactically a complete sentence, but semantically it is just as fragmentary as the previous utterances. The strong discourse expectations, however, suggest that it be processed in the same manner as syntactically incomplete utterances, since it satisfies the expectations of the interactive task. The terseness principle operates at all levels: syntactic, semantic and pragmatic. The contextual substitution rules exploit the semantic representation of queries and commands discussed in the previous section. The scope of these rules, however, is limited to the last user interaction of appropriate type in the dialog focus, as illustrated in the following example: This rule resolves the following kind of ellipsis: >What is the size of the 3 largest single port fixed media disks? >disks with two ports? Note that it is impossible to resolve this kind of ellipsis in a general manner if the previous query is stored verbatim or as a semantic-grammar parse tree. "Disks with two ports" would at best correspond to some <disk-descriptor> non-terminal, and hence, according to the LIFER algorithm [10, 11], would replace the entire phrase "single port fixed media disks" that corresponded to <disk-descriptor> in the parse of the original query. However, an informal poll of potential users suggests that the preferred interpretation of the ellipsis retains the MEDIA specifier of the original query. The ellipsis resolution process, therefore, requires a finer grain substitution method than simply inserting the highest level non-terminals in the ellipsed input in place of the matching non-terminals in the parse tree of the previous utterance. Taking advantage of the fact that a case frame analysis of a sentence or object description captures the meaningful semantic relations among its constituents in a canonical manner, a partially instantiated nominal case frame can be merged with the previous case frame as follows: • Substitute any cases instantiated in the original query that the ellipsis specifically overrides. For instance, "with two ports" overrides "single port" in our example, as both entail different values of the same case descriptor, regardless of their different syntactic roles. ("Single port" in the original query is an adjectival construction, whereas "with two ports" is a post-nominal modifier in the ellipsed fragment.) • Retain any cases in the original parse that are not explicitly contradicted by new information in the ellipsed fragment. For instance, "fixed media" is retained as part of the disk description, as are all the sentential-level cases in the original query, such as the quantity specifier and the projection attribute of the query ("size"). • Add cases of a case frame in the query that are not instantiated therein, but are specified in the ellipsed fragment. For instance, the "fixed head" descriptor is added as the media case of the disk nominal case frame in resolving the ellipsed fragment in the following example: >Which disks are configurable on a VAX 11/780? >Any configurable fixed head disks? • In the event that a new case frame is mentioned in the ellipsed fragment, wholesale substitution occurs, much like in the semantic grammar approach. For instance, if after the last example one were to ask "How about tape drives?", the substitution would replace "fixed head disks" with "tape drives", rather than replacing only "disks" and producing the phrase "fixed head tape drives", which is meaningless in the current domain. In these instances the semantic relations captured in a case frame representation and not in a semantic grammar parse tree prove immaterial. The key to case-frame ellipsis resolution is matching corresponding cases, rather than surface strings, syntactic structures, or non-canonical representations. It is true that instantiating a sentential or nominal case frame correctly in the parsing process requires semantic knowledge, some of which can be rather domain specific. But, once the parse is attained, the resulting canonical representation, encoding appropriate semantic relations, can and should be exploited to provide the system with additional functionality such as the present ellipsis resolution method. The major problem with semantic grammars is that they convolve syntax with semantics in a manner that requires multiple representations for the same semantic entity. For instance, the ordering of marked cases in the input does not reflect any difference in meaning (although one could argue that surface ordering may reflect differential emphasis and other pragmatic considerations). A pure semantic grammar must employ different rules to recognize each and every admissible case sequence. Hence, the resultant parse trees differ, and the knowledge that surface positioning of unmarked cases is meaningful, but positioning of marked ones is not, must be contained within the ellipsis resolution process, a very unnatural repository for such basic information. Moreover, in order to attain a measure of the functionality described above for case-frames, ellipsis resolution in semantic grammar parse trees must somehow merge adjectival and post-nominal forms (corresponding to different non-terminals and different relative positions in the parse trees) so that ellipsed structures such as "a disk with 1 port" can replace the "dual-port" part of the phrase "...dual-port fixed-media disk" in an earlier utterance. One way to achieve this effect is to collect together specific non-terminals that can substitute for each other in certain contexts, in essence grouping non-canonical representations into semantic equivalence classes. However, this process would require hand-crafting large associative tables or similar data structures, a high price to pay for each domain-specific semantic grammar. Hence, in order to achieve robust ellipsis resolution, all proverbial roads lead to recursive case constructions encoding domain semantics and canonical structure for multiple surface manifestations. Finally, consider one more rule that provides additional context in situations where the ellipsis is of a purely semantic nature, such as: ... The RP07-aa, the RP07-ab .... We need to answer the question "largest what?" before proceeding. One can call this problem a special case of definite noun phrase resolution, rather than semantic ellipsis, but terminology is immaterial. Such phrases occur with regularity in our corpus of examples and must be resolved by a fairly general process. The following rule answers the question from context, regardless of the syntactic completeness of the new utterance. If: A command or query caseframe lacks one or more required case fillers (such as a missing SELECT field), and the last case frame in focus has an instantiated case that meets all the semantic tests for the case missing the filler, ... XCALIBUR presently has eight contextual substitution rules, similar to the ones above, and we have found several additional ones to extend the coverage of ellipsed queries and commands (see [3] for a more extensive discussion). It is significant to note that a small set of fairly general rules exploiting the case frame structures covers most instances of commonly occurring ellipsis, including all the examples presented earlier in this section. Appendix:
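The two-stage control regime described above (discourse expectation rules tried first when expectations are strong, contextual substitution rules otherwise) can be pictured with the following minimal sketch. The rule lists and function signatures are hypothetical, introduced only for illustration; they are not XCALIBUR's actual implementation.

# Minimal sketch of the control flow described above: when strong discourse
# expectations exist (e.g. XSEL has just asked a clarifying question), the
# expectation rules are tried first; only if they all fail do the contextual
# substitution rules run against the dialog focus.
def resolve_fragment(fragment, dialog_focus, expectations,
                     expectation_rules, substitution_rules):
    if expectations:                                  # strong discourse expectations
        for rule in expectation_rules:
            resolved = rule(fragment, expectations)
            if resolved is not None:
                return resolved
    for rule in substitution_rules:                   # fall back on dialog context
        resolved = rule(fragment, dialog_focus)
        if resolved is not None:
            return resolved
    return None                                       # unresolved: ask the user to rephrase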
null
null
null
null
{ "paperhash": [ "wilensky|talking_to_unix_in_english:_an_overview_of_an_on-line_unix_consultant", "hayes|a_framework_for_processing_corrections_in_task-oriented_dialogues", "hayes|multi-strategy_construction-specific_parsing_for_flexible_data_base_query_and_update", "carbonell|dynamic_strategy_selection_in_flexible_parsing", "rich|building_and_exploiting_user_models", "kwasny|ungrammaticality_and_extra-grammaticality_in_natural_language_understanding_systems", "sidner|towards_a_computational_theory_of_definite_anaphora_comprehension_in_english_discourse", "perrault|speech_acts_as_a_basis_for_understanding_dialogue_coherence", "hendrix|developing_a_natural_language_interface_to_complex_data", "waltz|writing_a_natural_language_data_base_system", "hayes|multi-strategy_parsing_and_its_role_in_robust_man-machine_communication", "grosz|the_representation_and_use_of_focus_in_dialogue_understanding." ], "title": [ "Talking to UNIX in English: An Overview of an On-Line UNIX Consultant", "A Framework for Processing Corrections in Task-Oriented Dialogues", "Multi-Strategy Construction-Specific Parsing for Flexible Data Base Query and Update", "Dynamic Strategy Selection in Flexible Parsing", "Building and Exploiting User Models", "Ungrammaticality and Extra-Grammaticality in Natural Language Understanding Systems", "Towards a computational theory of definite anaphora comprehension in English discourse", "Speech Acts as a Basis for Understanding Dialogue Coherence", "Developing a natural language interface to complex data", "Writing a Natural Language Data Base System", "Multi-strategy parsing and its role in robust man-machine communication", "The representation and use of focus in dialogue understanding." ], "abstract": [ "UC (UNIX Consultant) is an intelligent natural language interface that allows naive users to communicate with the UNIX operating system in ordinary English. The goal of UC is to provide a natural language help facility that allows new users to learn operating system conventions in a relatively painless way. UC is not meant to be a substitute for a good operating system command interpreter, but rather, an additional tool at the disposal of the new user, to be used in conjunction with other operating system components. UC allows the user to engage in natural language dialogues with the operating system. While there are a number of other natural language interfaces available today, these are mostly used as natural language front ends to particular data bases (for example, see Hayes & Carbonell 1981, Hendrix 1977, Robinson 1982, Waltz et al. 1976, and Woods 1970). In contrast, the user uses UC in order to learn how better to use the UNIX environment in which UC is embedded. UC can handle requests stated in a wide variety of forms, and has a number of features to enhance its function as a user interface.", "Mundane discourse abounds with utterances referring to other utterances. These meta-language utterances appear with surprising frequency in task-oriented dialogues, such as those arising in the context of a natural language interface to an operating system. This paper identifies some simpler types of dialogue-level metalanguage utterance and provides a computational framework to process such phrases in the context of a case-frame parser exploiting strongly-typed domain semantics.", "The advantages of a multi-strategy, construction-specific approach to parsing in applied natural language processing are explained through an examination of two pilot parsers we have constructed. 
Our approach exploits domain semantics and prior knowledge of expected constructions, using multiple parsing strategies each optimized to recognize different construction types. It is shown that a multi strategy approach leads to robust, flexible, and efficient parsing of both grammatical and ungrammatical input in limited-domain, task oriented, natural language interfaces. We also describe plans to construct a single, practical, multi-strategy parsing system that combines the best aspects of the two simpler parsers already implemented into a more complex, embedded-constituent control structure. Finally, we discuss some issues in data base access and update, and show that a construction-specific approach, coupled with a case structured data base description, offers a promising approach to a unified, interactive data base query and update system.", "Robust natural language interpretation requires strong semantic domain models, \"fail-soft\" recovery heuristics, and very flexible control structures. Although single-strategy parsers have met with a measure of success, a multi-strategy approach is shown to provide a much higher degree of flexibility, redundancy, and ability to bring task-specific domain knowledge (in addition to general linguistic knowledge) to bear on both grammatical and ungrammatical input. A parsing algorithm is presented that integrates several different parsing strategies, with case-frame instantiation dominating. Each of these parsing strategies exploits different types of knowledge; and their combination provides a strong framework in which to process conjunctions, fragmentary input, and ungrammatical structures, as well as less exotic, grammatically correct input. Several specific heuristics for handling ungrammatical input are presented within this multi-strategy framework.", "This paper describes the issues involved in building and exploiting individual user models in order to guide the performance of an interactive system. A system called Grundy, that recommends novels to people, is described and analyzed as a forum in which to explore those issues. One of the major techniques exploited in Grundy is the use of stereotypes as a mechanism for quickly producing an initial approximation to a model of the user. Experiments with the system show that despite the fact that stereotypes are inherently imperfect, their use does contibute significantly to the system's ability to build useful models of its users.", "Among the components included in Natural Language Understanding (NLU) systems is a grammar which spec i f i es much o f the l i n g u i s t i c s t ruc tu re o f the ut terances tha t can be expected. However, i t is ce r ta in tha t inputs that are ill-formed with respect to the grammar will be received, both because people regularly form ungra=cmatical utterances and because there are a variety of forms that cannot be readily included in current grammatical models and are hence \"extra-grammatical\". These might be rejected, but as Wilks stresses, \"...understanding requires, at the very least, ... some attempt to interpret, rather than merely reject, what seem to be ill-formed utterances.\" [WIL76]", "Abstract : This report investigates the process of focussing as a description and explanation of the comprehension of certain anaphoric expressions in English discourse. The investigation centers on the interpretation of definite anaphora, that is, on the personal pronouns, and noun phrases used with a definite article the, this, or that. 
Focussing is formalized as a process in which a speaker centers attention on a particular aspect of the discourse. An algorithmic description specifies what the speaker can focus on and how the speaker may change the focus of the discourse as the discourse unfolds. The algorithm allows for a simple focussing mechanism to be constructed: an element in focus, an ordered collection of alternate foci, and a stack of old foci. The data structure for the element in focus is a representation which encodes a limited set of associations between it and other elements from the discourse as well as from general knowledge. This report also establishes other constraints which are needed for the successful comprehension of anaphoric expressions. The focussing mechanism is designed to take advantage of syntactic and semantic information encoded as constraints on the choice of anaphora interpretation. These constraints are due to the work of language researchers; and the focussing mechanism provides a principled means for choosing when to apply the constraints in the comprehension process.", "Webster's dictionary defines \"coherence\" as \"the quality of being logically integrated, consistent, and intelligible\". If one were asked whether a sequence of physical acts being performed by an agent was coherent, a crucial factor in the decision would be whether the acts were perceived as contributing to the achievement of an overall goal. In that case they can frequently be described briefly, by naming the goal or the procedure executed to achieve it. Once the intended goal has been conjectured, the sequence can be described as a more or less correct, more or less optimal attempt at the achievement of the goal.", "Aspects of an intelligent interface that provides natural language access to a large body of data distributed over a computer network are described. The overall system architecture is presented, showing how a user is buffered from the actual database management systems (DBMSs) by three layers of insulating components. These layers operate in series to convert natural language queries into calls to DBMSs at remote sites. Attention is then focused on the first of the insulating components, the natural language system. A pragmatic approach to language access that has proved useful for building interfaces to databases is described and illustrated by examples. Special language features that increase system usability, such as spelling correction, processing of incomplete inputs, and run-time system personalization, are also discussed. The language system is contrasted with other work in applied natural language processing, and the system's limitations are analyzed.", "We present a model for processing English requests for information from a relational data base. The model has as its main steps (a) locating semantic constituents of a request; (b) matching these constituents against larger templates called concept case frames; (c) filling in the concept case frame using information from the user's request, from the dialogue context and from the user's responses to questions posed by the system; and (d) generating a formal data base query using the collected information. Methods are suggested for constructing the components of such a natural language processing system for an arbitrary relational data base. 
The model has been applied to a large data base of aircraft flight and maintenance data to generate a system called PLANES; examples are drawn from this system.", "Robust natural language interpretation requires strong semantic domain models, \"fai lsoft\" recovery heuristics, flexible control structures, and focused user interaction when automatic correct ion proves infeasible. Al though single-strategy parsers have met with some success, a multi-strategy approach, with strategies selected dynamically according to the type of construct ion being parsed at any given time, is shown to provide a higher degree of flexibility, redundancy, and ability to bring task-specific domain knowledge (in addition to general linguistic knowledge) to bear on both grammatical and ungrammatical input. This construction-specif ic, multi-strategy approach can also help provide tightly focused interaction with the user in cases of semantic or structural ambiguity by allowing such ambiguities to be represented without dupl icat ion of unambiguous material. The approach also aids in task-specific language development by allowing direct interpretation of languages defined in terms natural to the task domain. A parsing algorithm integrating case-frame instantiation and partial pattern matching strategies is presented. The algorithm can deal with conjunct ions, fragmentary input, and ungrammatical structures, as well as less exotic, grammatically correct input. This research was sponsored in part by the Defense Advanced Research Projects Agency (DOD), ARPA Order No. 3597, monitored by the Air Force Avionics Laboratory under contract F33615-78-C-1551, and in part by the Air Force Office of Scientific Research under Contract F49620-79-C-0143. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of DARPA, the Air Force Office of Scientific Research or the US government. University Libraries Carnegie Mellon University Pittsburgh PA 15213-389©", "Abstract : This report develops a representation of focus of attention thatcircumscribes discourse contexts within a general representation ofknowledge. Focus of attention is essential to any comprehension processbecause what and how a person understands is strongly influenced bywhere his attention is directed at a given moment. To formalize thenotion of focus, the need for and the use of focus mechanisms areconsidered from the standpoint of building a computer system that canparticipate in a natural language dialogue with a ser, Two ranges offocus, global and immediate, are investigated, and representations forincorporating them in a computer system are developed.The global focus in which an utterance is interpreted is determinedby the total discourse and situational setting of the utterance. Itinfluences what is talked about, how different concepts are introduced,and how concepts are referenced. To encode global focuscomputationally, a representation is developed that highlights thoseitems that are relevant at a given place in a dialogue. 
The underlyingknowledge representation is segmented into subunits, called focusspaces, that contain those items that are in the focus of attention of adialogue participant during a particular part of the dialogue.Mechanisms are required for updating the focus representation,because, as a dialogue progresses, the objects and actions that arerelevant to the conversation, and therefore in the participants' focusof attention, change. Procedures are described for deciding when andhow to shift focus in task-oriented dialogues, i.e., in dialogues inwhich the participants are cooperating in a shared task. Theseprocedures are guided by a representation of the task being performed.The ability to represent focus of attention in a languageunderstanding system results in a new approach to an important problemin discourse comprehension -- the identification of the referents ofdefinite noun phrases." ], "authors": [ { "name": [ "R. Wilensky" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "P. Hayes", "J. Carbonell" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "P. Hayes", "J. Carbonell" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "J. Carbonell", "P. Hayes" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "E. Rich" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "S. Kwasny", "N. Sondheimer" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "C. Sidner" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "C. Raymond Perrault", "James F. Allen" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "G. Hendrix", "E. Sacerdoti", "Daniel Sagalowicz", "Jonathan Slocum" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "D. Waltz", "Bradley A. Goodman" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "P. Hayes", "J. Carbonell" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "B. Grosz" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null, null, null, null, null, null, null, null, null, null ], "s2_corpus_id": [ "18792027", "1428874", "18819553", "7271323", "41709947", "12695499", "41092026", "857232", "15391397", "2983985", "43871341", "61114426" ], "intents": [ [], [], [], [], [], [], [], [], [], [], [], [ "background" ] ], "isInfluential": [ false, false, false, false, false, false, false, false, false, false, false, false ] }
- Problem: The paper addresses discourse phenomena in task-oriented man-machine dialogs, emphasizing the necessity of handling ellipsis, anaphora, extragrammaticality, inter-sentential metalanguage, and other abbreviatory devices for achieving convivial user interaction. - Solution: The paper proposes that users prefer generating terse or fragmentary utterances over longer expressions, leading to the design of the XCALIBUR expert system interface with a rule-based caseframe method for generalized ellipsis resolution, aiming to improve user interaction in task-oriented dialogs.
500
0.122
null
null
null
null
null
null
null
null
b455fe4c0b80a03581d010bd221c1a54892b2cc0
9504386
null
Crossed Serial Dependencies: A low-power parseable extension to {GPSG}
An extension to the GPSG grammatical formalism is proposed, allowing non-terminals to consist of finite sequences of category labels, and allowing schematic variables to range over such sequences. The extension is shown to be sufficient to provide a strongly adequate grammar for crossed serial dependencies, as found in e.g. Dutch subordinate clauses. The structures induced for such constructions are argued to be more appropriate to data involving conjunction than some previous proposals have been. The extension is shown to be parseable by a simple extension to an existing parsing method for GPSG.
{ "name": [ "Thompson, Henry" ], "affiliation": [ null ] }
null
null
21st Annual Meeting of the Association for Computational Linguistics
1983-06-01
13
5
null
null
In a recent paper (Bresnan Kaplsn Peters end Zaenen 1982) (hereafter BKPZ), a solution to the Dutch problem was presented in terms of LFG (Kaplan and Bresnan 1982) , which is known to have considerably more than context-free power. (Steedman 1983) and (Joshi 1983) have also made proposals for solutions in terms of Steedman/Ades grammars and tree adjunction grammars (Ades and Steedman 1982; Joshi Levy and Yueh 1975) .In this paper I present a minimal extension to the GPSC formalism (Gazdar 1981c) CONS(CADR (Q')(a' )(CA~(Q' )),CDDR (Q ' )) (~ where Q' is short for SI, Z~,b ' ,CO~S(CAR (Q ' )(a') (S') ,CDR(Q ' ))(2 where Q' is short for Ziqh ' , ADJOIN(Z' ,b' ).( 3 These By suitably combining the rules (A) -(E), together with the meta-generated rules (Bi) -(Di), (Bii) and (Cii), we can now generate examples (2) (4). (4), which is fully crossed, is very similar to the example in section II.1, and uses meta-generated expansions for all its VP nodes:EQUATIONEQUATIONOnce again I include the relevant rule name in the margin, and indicate with subscripts the rule name feature introduced to enforce subcategorisation.Sentences (2) and (3) each involve two metagenerated rules and one ordinary one. For reasons of space, only (3) is illustrated below.(2) is similar, but using rules (B), (Cii), and (Di). For our purposes simple interpretations of (B) -(D) will suffice:s' (A) ~P vP (Rii) a ik [vP,Zb] (ci) .~Pc [Vb,Vc]~ ~ (E),(Di) Nikki VB') v'(vP') c') v' (NP' ,~') D') v'(NP').The semantics for the metarules is also reasonably straightforward, given that we know where we are going:I') F(V') ==> CONS(F(CAR(Z:V')),CDR(Z',V')) II') F(V',VP') ==> CONS(F(CADR(Q'),CAR(Q')), cm~(Q')),where Q' is short for VPlZl, V '. (I') will give semantics very much like those of rule (2) in section II.2, while (II') will give semantics like those of rule (I). (E °) is just like (3):E') ADJ01N(Z' ,W ' )It is left to the enthusiastic reader to work through the examples and see that all of sentences (I) -
null
null
null
Main paper: : In a recent paper (Bresnan Kaplsn Peters end Zaenen 1982) (hereafter BKPZ), a solution to the Dutch problem was presented in terms of LFG (Kaplan and Bresnan 1982) , which is known to have considerably more than context-free power. (Steedman 1983) and (Joshi 1983) have also made proposals for solutions in terms of Steedman/Ades grammars and tree adjunction grammars (Ades and Steedman 1982; Joshi Levy and Yueh 1975) .In this paper I present a minimal extension to the GPSC formalism (Gazdar 1981c) CONS(CADR (Q')(a' )(CA~(Q' )),CDDR (Q ' )) (~ where Q' is short for SI, Z~,b ' ,CO~S(CAR (Q ' )(a') (S') ,CDR(Q ' ))(2 where Q' is short for Ziqh ' , ADJOIN(Z' ,b' ).( 3 These By suitably combining the rules (A) -(E), together with the meta-generated rules (Bi) -(Di), (Bii) and (Cii), we can now generate examples (2) (4). (4), which is fully crossed, is very similar to the example in section II.1, and uses meta-generated expansions for all its VP nodes:EQUATIONEQUATIONOnce again I include the relevant rule name in the margin, and indicate with subscripts the rule name feature introduced to enforce subcategorisation.Sentences (2) and (3) each involve two metagenerated rules and one ordinary one. For reasons of space, only (3) is illustrated below.(2) is similar, but using rules (B), (Cii), and (Di). For our purposes simple interpretations of (B) -(D) will suffice:s' (A) ~P vP (Rii) a ik [vP,Zb] (ci) .~Pc [Vb,Vc]~ ~ (E),(Di) Nikki VB') v'(vP') c') v' (NP' ,~') D') v'(NP').The semantics for the metarules is also reasonably straightforward, given that we know where we are going:I') F(V') ==> CONS(F(CAR(Z:V')),CDR(Z',V')) II') F(V',VP') ==> CONS(F(CADR(Q'),CAR(Q')), cm~(Q')),where Q' is short for VPlZl, V '. (I') will give semantics very much like those of rule (2) in section II.2, while (II') will give semantics like those of rule (I). (E °) is just like (3):E') ADJ01N(Z' ,W ' )It is left to the enthusiastic reader to work through the examples and see that all of sentences (I) - Appendix:
null
null
null
null
{ "paperhash": [ "thompson|chart_parsing_and_rule_schemata_in_psg", "steedman|on_the_generality_of_the_nested-dependency_constraint_and_the_reason_for_an_exception_in_dutch" ], "title": [ "Chart Parsing and Rule Schemata in PSG", "On the generality of the nested-dependency constraint and the reason for an exception in Dutch" ], "abstract": [ "In this paper I want to describe how I have used MCHART in beginning to construct a parser for gr-mm-rs expressed in PSG, and how aspects of the chart parsing approach in general and MCHART in particular have made it easy to acco~mmodate two significant aspects of PSG: rule schemata involving variables over categories; and compound category symbols (\"slash\" categories). To do this I will briefly introduce the basic ideas of chart parsing; describe the salient aspects of MEHART; give an overview of PSG; and finally present the interesting aspects of the parser I am building for PSG using MCHART. Limitations of space, time, and will mean that all of these sections will be brief and sketchy I hope to produce a much expanded version at a later date.", "Several recent papers have argued that the phenomena which have been widely assumed to demand the inclusion of transformations in grammars of natural languages can in fact be handled within systems of lesser power, such as context-free grammar (cf. Brame 1976, 1978; Bresnan 1978; Gazdar 1981; Peters 1980). In a paper called On the order of words' (Ades and Steedman 1982; hereafter OOW) it was similarly argued that many of the constraints on unbounded movement that have been observed within the transformationalist approach could be explained within a theory consisting of a categorial grammar (Ajdukiewicz 1935; Bar-Hillel et al. 1960; Lyons 1968) augmented with a small number of simple rule schemata called (because of their resemblance to the operations of a parser) 'combination rules'. The present paper concerns the application of this theory of grammar to certain constructions allegedly involving 'crossed dependencies'. Such constructions are widely assumed to present a serious challenge for alternatives to the transformationalist approach. The theory put forward in OOW took as its point of departure the observation concerning movement and other dependencies in natural languages, that usually goes by the name of the 'nested-dependency constraint'. There is a strong tendency among the languages of the world to forbid constructions like (a), where the dependencies (indicated by subscripts) between elements of the sentence 'intersect' — that is, where one of a pair of dependent items intervenes between the members of another pair. Usually only their nonintersecting relatives are allowed, as in (b)." ], "authors": [ { "name": [ "H. Thompson" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Mark Steedman" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null ], "s2_corpus_id": [ "15974189", "143831650" ], "intents": [ [ "background" ], [ "methodology", "background" ] ], "isInfluential": [ false, false ] }
Problem: The paper proposes an extension to the GPSG grammatical formalism to allow non-terminals to consist of finite sequences of category labels and schematic variables to range over such sequences. Solution: The extension is shown to be sufficient to provide a strongly adequate grammar for crossed serial dependencies in Dutch subordinate clauses, offering more appropriate structures for data involving conjunction compared to previous proposals.
500
0.01
null
null
null
null
null
null
null
null
c94e2841531c8a43b4dd172897a93af01e89421e
10056961
null
Design of a Knowledge-Based Report Generator
Knowledge-Based Report Generation is a technique for automatically generating natural language reports from computer databases. It is so named because it applies knowledge-based expert systems software to the problem of text generation. The first application of the technique, a system for generating natural language stock reports from a daily stock quotes database, is partially implemented. Three fundamental principles of the technique are its use of domain-specific semantic and linguistic knowledge, its use of macro-level semantic and linguistic constructs (such as whole messages, a phrasal lexicon, and a sentence-combining grammar), and its production system approach to knowledge representation.
{ "name": [ "Kukich, Karen" ], "affiliation": [ null ] }
null
null
21st Annual Meeting of the Association for Computational Linguistics
1983-06-01
16
205
null
REPORT GENERATION A knowledge-based report generator is a computer program whose function is to generate natural language summaries from computer databases. For example, knowledge-based report generators can be designed to generate daily stock market reports from a stock quotes database, daily weather reports from a meteorological database, weekly sales reports from corporate databases, or quarterly economic reports from U. S. Commerce Department databases, etc. A separate generator must be implemented for each domain of discourse because each knowledge-based report generator contains domain-specific knowledge which is used to infer interesting messages from the database and to express those messages in the sublanguage of the domain of discourse. The technique of knowledge-based report generation is generalizable across domains, however, and the actual text generation component of the report generator, which comprises roughly one-quarter of the code, is directly transportable and readily tailorable.Knowledge-based report generation is a practical approach to text generation. It's three fundamental tenets are the following. First, it assumes that much domain-specific semantic, linguistic, and rhetoric knowledge is required in order for a computer to automatically produce intelligent and fluent text. Second, it assumes that production system languages, such as those used to build expert systems, are wellsuited to the task of representing and integrating semantic, linguistic, and rhetoric knowledge. Finally, it holds that macro-level knowledge units, such as whole seman-tic messages, a phrasal lexicon, clausal grammatical categories, and a clause-combining grammar, provide an appropriate level of knowledge representation for generating that type of text which may be categorized as periodic summary reports. These three tenets guide the design and implementation of a knowledge-based report generation system.The first application of the technique of knowledge-based report generation is a partially implemented stock report generator called Aria. Data from a Dow Jones stock quotes database serves as input to the system, and the opening paragraphs of a stock market summary are produced as output. As more semantic and linguistic knowledge about the stock market is added to the system, it will be able to generate longer, more informative reports. (1) after climbing steadily through most of the morning , the stock market was pushed downhill late in the day stock prices posted a small loss , with the indexes turning in a mixed showing yesterday in brisk trading .the Dow Jones average of 30 industrials surrendered a 16.28 gain at 4pro and declined slightly , finishing the day at 1083.61 ,off 0.18 points. 2wall street's securities markets rose steadily through most of the morning , before sliding downhill late in the day the stock market posted a small loss yesterday , with the indexes finishing with mixed results in active trading .the Dow Jones average of 30 industrials surrendered a 16.28 gain at 4pro and declined slightly , to finish at 1083.61 , off 0.18 points .In order to generate accurate and fluent summaries, a knowledge-based report generator performs two main tasks: first, it infers semantic messages from the data in the database; second, it maps those messages into phrases in its phrasal lexicon, stitching them together according to the rules of its clause-combining grammar, and incorporating rhetoric constraints in the process. 
As the work of McKeown I and Mann and Moore 2 demonstrates, neither the problem of deciding what to say nor the problem of determining how to say it is trivial, and as'Appelt 3 has pointed out, the distinction between them is not always clear.A knowledge-based report generator consists of the following four independent, sequential components: 1) a fact generator, 2) a message generator, 3) a discourse organizer, and 4) a text generator. Data from the database serves as input to the first module, which produces a stream of facts as output; facts serve as input to the second module, which produces a set of messages as out-put; messages form the input to the third module, which organizes them and produces a set of ordered messages as output; ordered messages form the input to the fourth module, which produces final text as output. The modules function independently and sequentially for the sake of computational manageability at the expense of psychological validity.With the exception of the first module, which is a straightforward C program, the entire system is coded in the OPS5 production system language. 4 At the time that the sample output above was generated, module 2, the message generator, consisted of 120 production rules; module 3, the discourse organizer contained 16 production rules; and module 4, the text generator, included 109 production rules and a phrasal dictionary of 519 entries. Real time processing requirements for each module on a lightly loaded VAX 11/780 processor were the following: phase 1 16 seconds, phase 2 -34 seconds, phase 3 -24 seconds, phase 4 -1 minute, 59 seconds.The fundamental knowledge constructs of the system are of two types: 1) static knowledge structures, or memory elements, which can be thought of as ndimensional propositions, and 2) dynamic knowledge structures, or production rules, which perform patternrecognition operations on n-dimensional propositions, Static knowledge structures come in five flavors: facts. messages, lexicon entries, medial text elements, and various control elements. Dynamic knowledge constructs occur in ten varieties: inference productions, ordering productions, discourse mechanics productions, phrase selection productions, syntax selection productions, anaphora selection productions, verb morphology productions, punctuation selection productions, writing productions, and various control productions.The function of the first module is to perform the arithmetic computation required to produce facts that contain the relevant information needed to infer interesting messages, and to write those facts in the OPS5 memory element format. 
For example, the fact that indicates the closing status of the Dow Jones Average of 30 Industrials for January 12, 1983 is:(make fact "fname CLb-~rAT "iname DJI "itype COMPOS "date 01/12 "hour CLOSE "openlevel 1084.25 "high-level 1105.13 "low-level 1075.88 "close-level 1083.61 "cumul-dir DN "cumul-deg 0.18)The function of the second module is to inter interesting messages from the facts using inferencing productions such as the following:(p instan-mixedup (goal "stat act "op instanmixed) (fact "(name CLSTAT "iname DJI "cumul-dir UP "repdate <date>) (fact "(name ADVDEC "iname NYSE "advances <x> "declines {<y> > <x>}) (make message "top GENMKT "subtop MIX "mix mixed "repdate <date> "subjclass MKT "tim close) (make goal "star pend "op writemessage) (remove 1) )This production infers that if the closing status of the Dow had a direction of "up', and yet the number of declines exceeded the number of advances for the day, then it can be said that the market was mixed. The message that is produced looks like this:(make message "repdate 01/12 "top GENMKT "subsubtop nil "subtop MIX "subjclass MKT "dir nil "deg nil "vardeg I nil ] "varlev I nil [ "mix mixed "chg nil "sco nil "tim close "vartim I nil i "dur nil "vol nil "who nil ) The inferencing process in phase 2 is hierarchically controlled.Module 3 performs the uncomplicated task of grouping messages into paragraphs, ordering messages within paragraphs, and assigning a priority number to each message. Priorities are assigned as a function of topic and subtopic. The system "knows" a default ordering sequence, and it "knows" some exception rules which assign higher priorities to messages of special significance, such as indicators hitting record highs. As in module 2, processing is hierarchically controlled. Eventually, modules 2 and 3 should be combined so that their knowledge could be shared.The most complicated processing is performed by module 4. This processing is not hierarchically controlled, but instead more closely resembles control in an ATN. Module 4, the text generator, coordinates and executes the following activities: 1) selection of phrases from the phrasal lexicon that both capture the semantic meaning of the message and satisfy rhetorical constraints; 2) selection of appropriate syntactic forms for predicate phrases, such as sentence, participial clause, prepositional phrase, etc.; 3) selection of appropriate anaphora for subject phrases 4) morphological processing of verbs; 5) interjection of appropriate punctuation; and 6) control of discourse mechanics, such as inclusion of more than one clause per sentence and more than one sentence per paragraph.The module 4 processor is able to coordinate and execute these activities because it incorporates and integrates the semantic, syntactic, and rhetoric knowledge it needs into its static and dynamic knowledge structures. 
For example, a phrasal lexicon entry that might match the "mixed market" message is the following:(make phraselex "top GENMKT "subtop MIX "mix mixed "chg nil "tim close "subjtype NAME "subjclass MKT *predfs turned Apredfpl turned "predpart turning "predinf ~to turnl ^predrem ~n a mixed showing] "fen 9 "rand 5 "imp 11) An example of a syntax selection production th,tt would select the syntactic form subordinate-participial-clause as an appropriate form for a phrase (a~) in "after rising steadily through most of the morning") is the following:(p 5.selectsu borpartpre-selectsyntax (goal ^stat act "op selectsyntax); 1 (sentreq "sentstat nil) ; 2 (message "foc in "top <t> "tim <> nil "subjclass <sc>) ; 3 (message "foc nil "top <t> "tim <> nil "subjclass <sc>) ; 4 (paramsynforms "suborpartpre <set>) (randnum "randval < <set>) (lastsynform "form << initsent prepp >> ) -(openingsynform "form < < suborsent suborpart > >) -(message "foc in "tim close) For each message, in sequence, the system first selects a predicate phrase that matches the semantic content of the message, and next selects a syntactic form. such as sentence or prepositional phrase, into which form the predicate phrase may be hammered. The system's default goal is to form complex sentences by combining a variable number of messages expressed m a variety of syntactic forms in each sentence. Every message may be expressed in the syntactic form of a simple sentence. But under certain grammatical and rhetorical conditions, which are specified in the syntax selection productions, and which sometimes include looking ahead at the next sequential message, the system opts for a different syntactic form.The right-branching behavior of the system implies that at any point the system has the option to lay down a period and start a new ~ntence. It also implies that embedded subject-complement forms, such as relative ;5 ;6 ;7 clauses modifying subjects, are trickier to implement (and have not been implemented as yet). That embedded subject complements pose special difficulties should not be considered discouraging. Developmental linguistics research reveals that "operations on sentence subjects, including subject complementation and relative clauses modifying subjects" are among the last to appear in the acquisition of complex sentences, 7 and a knowledge-based report generator incorporates the basic mechanism for eventually matching messages to nominalizations of predicate phrases to create subject complements, as well as the mechanism for embedding relative clauses.How does one determine what knowledge must incorporated into a knowledge-based report generator? Because the goal of a knowledge-based report generator is to produce reports that are indistinguishable from reports written by people for the same database, it is logical to turn to samples of naturally generated text from the specific domain of discourse in order to gain insights into the semantic, linguistic, and rhetoric knowledge requirements of the report generator.Research in machine translation s and text understanding 9 has demonstrated that not only does naturally generated text disclose the lexicon and grammar of a sublanguage, but it also reveals the essential semantic classes and attributes of a domain of discourse, as well as the relations between those classes and attributes. Thus, samples of actual text may be used to build the phrasal dictionary for a report generator and to define the syntactic categories that a generator must have knowledge of. 
Similarly, the semantic classes, attributes and relations revealed in the text define the scope and variety of the semantic knowledge the system must incorporate in order to infer relevant and interesting messages from the database.Ana's phrasal lexicon consists of subjects, such as "wall street's securities markets", and predicates, such as "were swept into a broad and steep decline", which are extracted from the text of naturally generated stock reports, The syntactic categories Ann knows about are the clausal level categories that are found in the same text, such as, sentence, coordinate-sentence, subordinate-sentence, subordinate-participial-clause, prepositional-phrase, and others.Semantic analysis of a sample of natural text stock reports discloses that a hierarchy of approximately forty message classes accounts for nearly all of the semantic information contained in the "core market sentences" of stock reports. The term "core market sentences" was introduced by Kittredge to refer to those sentences which can be inferred from the data in the data base without reference to external events such as wars, strikes, and corporate or government policy making. 1° Thus, for example, Ana could say "Eastman Kodak advanced 2 3/4 to 85 3/4;" but it could not append "it announced development of the world's fastest color film for delivery in 1983.". Aria currently has knowledge of only six message classes. These include the closing market status message, the volume of trading message, and the mixed market message, the interesting market fluctuations message, the closing Dow status message, and the interesting Dow fluctuations message.The use of production systems for natural language processing was suggested as early as 1972 by Heidorn,ll whose production language NLP is currently being used for syntactic processing research. A production system for language understanding has been implemented in OPS5 by Frederking. 12 Many benefits are derived from using a production system to represent the knowledge required for text generation. Two of the more important advantages are the ability to integrate semantic, syntactic, and rhetoric knowledge, and the ability to extend and tailor the system easily.Knowledge integration is evident in the production rule displayed earlier for selecting the syntactic form of subordinate participial clause. In English, that production said: IF 1) there is an active goal to select a syntactic form 2) the sentence requirement has not been satisfied 3) the message currently in focus has topic <t>, subject class <sc>, and some non-nil time 4) the next sequential message has the same topic. 
subject class, and some non-nil time 5) the subordinate-participial-clause parameter is set at value <set> 6) the current random number is less than <set> 7) the last syntactic form used was either a prepositional phrase or a sentence initializer 8) the opening syntactic form of the last sentence was not a subordinate sentence or a subordinate participial clause 9) the time attribute of the message in focus does not have value 'close' THEN 1) remove the goal of selecting a syntactic form 2) make the current syntactic form a subordinate participial clause 3) modify the next sequential message to put it in peripheral focus 4) set a goal to select a subordinating conjunction.It should be apparent from the explanation that the rule integrates semantic knowledge, such as message topic and time, syntactic knowledge, such as whether the sentence requirement has been satisfied, and rhetoric knowledge, such as the preference to avoid using subordinate clauses as the opening form of two consecutive sentences.Conditions number 5 and 6, the syntactic form parameter and the random number, are examples of control elements that are used for syntactic tailoring. A syntactic form parameter may be preset at any value between 1 and 11 by the system user. A value of 8, for example, would result in an 80 percent chance that the rule in which the parameter occurs would be satisfied if all its other conditions were satisfied. Consequently, on 20 percent of the occasions when the rule would have been otherwise satisfied, the syntactic form parameter would prevent the rule from firing, and the system would be forced to opt for a choice of some other syntactic form. Thus, if the user prefers reports that are low on subordinate participial clauses, the subordinate participial clause parameter might be set at 3 or lower.The following production contains the bank of parameters as they were set to generate text sample (2) above:(p _ l.setparams (goal "stat act "op setparams) (remove 1) (make paramsyllables "val 30) (make parammessages "val 3) (make paramsynforms "sentence 11 "coorsent 11 "suborsent 11 "prepphrase 11 "suborsentpre 5 "suborpartpre 8 "suborsentpost 8 "suborpartpost 11 "subol'partsentpost I 1When sample text (1) was generated, all syntactic form parameters were set at 11. The first two parameters in the bank are rhetoric parameters. They control the maximum length of sentences in syllables (roughly) and in number of messages per sentence.Not only does production system knowledge representation allow syntactic tailoring, but it also permits semantic tailoring. Aria could be tailored to focus on particular stocks or groups of stocks to meet the information needs of individual users. Furthermore, a production system is readily extensible. Currently, Ana has only a small amount of general knowledge about the stock market and is far from a stock market expert. But any knowledge that can be made explicit can be added to the system prolonged incremental growth in the knowledge of the system could someday result in a system that truly is a stock market expert.The problem of dealing with the complexity of natural language is made much more tractable by working in macro-level knowledge constructs, such as semantic units consisting of whole messages, lexical iter-¢ ~,~,asisting of whole phrases, syntactic categories at the clause level, and a clause-combining grammar. Macrolevel processing buys linguistic fluency at the cost of semantic and linguistic flexibility. 
However, the loss of flexibility appears to be not much greater than the constraints imposed by the grammar and semantics of the sublanguage of the domain of discourse. Furthermore, there may be more to the notion of macro-level semantic and linguistic processing than mere computational manageability.The notion of a phrasal lexicon was suggested by Becker, 13 who proposed that people generate utterances "mostly by stitching together swatches of text that they have heard before. Wilensky and Arens have experimented with a phrasal lexicon in a language understanding system. 14 I believe that natural language behavior will eventually be understood in terms of a theory of stratified natural language processing in which macrolevel knowledge constructs, such as those used in a knowledge-based report generator, occur at one of the higher cognitive gtrata.A poor but useful analogy to mechanical gearshifting while driving a car can be drawn. Just as driving in third gear makes most efficient use of an automobile's resources, so also does generating language in third gear make most efficient use of human information processing resources. That is, matching whole phrases and applying a clause-combining grammar is cognitively economical. But when only a near match for a message can be found in a speaker's phrasal dictionary, the speaker must downshift into second gear, and either perform some additional processing on the nhrase to transform it into the desired form to match the message, or perform some processing on the message to transform it into one that matches the phrase. And if not even a near match for a message can be found, the speaker must downshift into first gear and either construct a phrase from elementary texicai items, including words, prefixes, and suffixes, or reconstruct the message.As currently configured, a knowledge-based text generator operates only in third gear. Because the units of processing are linguistically mature whole phrases, the report generation system can produce fluent text without having the detailed knowledge-needed to construct mature phrases from their elementary components. But there is nothing except the time and insight of a system implementor to prevent this detailed knowledge from being added to the system. By experimenting with additional knowledge, a system could gradually be extended to shift into lower gears, to exhibit greater interaction between semantic and linguistic components, and to do more flexible, if not creative, generation of semantic messages and linguistic phrases. A knowledge-based report generator may be viewed as a starting tool for modeling a stratiform theory of natural language processing.Knowledge-based report generation is practical because it tackles a moderately ill-defined problem with an effective technique, namely, a macro-level, knowledge-based, production system technique. Stock market reports are typical instances of a whole class of summary-type periodic reports for which the scope and variety of semantic and linguistic complexity is great enough to negate a straightforward algorithmic solution, but constrained enough to allow a high-level cross-wise slice of the variety of knowledge to be effectively incorporated into a production system. Even so, it will be some time before the technique is cost effective. 
The time required to add knowledge to a system is greater than the time required to add productions to a traditional expert system.Most of the time is spent doing semantic analysis for the purpose of creating useful semantic classes and attributes, and identifying the relations between them. Coding itself goes quickly, but then the system must be tested and calibrated (if the guesses on the semantics were close) or redone entirely (if the guesses were not close). Still, the initial success of the technique suggests its value both as a basic research tool, for exploring increasingly more detailed semantic and linguistic processes, and as an applied research tool, for designing extensible and tailorable automatic report generators.
null
null
null
null
Main paper: i. what is knowledge-based: REPORT GENERATION A knowledge-based report generator is a computer program whose function is to generate natural language summaries from computer databases. For example, knowledge-based report generators can be designed to generate daily stock market reports from a stock quotes database, daily weather reports from a meteorological database, weekly sales reports from corporate databases, or quarterly economic reports from U. S. Commerce Department databases, etc. A separate generator must be implemented for each domain of discourse because each knowledge-based report generator contains domain-specific knowledge which is used to infer interesting messages from the database and to express those messages in the sublanguage of the domain of discourse. The technique of knowledge-based report generation is generalizable across domains, however, and the actual text generation component of the report generator, which comprises roughly one-quarter of the code, is directly transportable and readily tailorable.Knowledge-based report generation is a practical approach to text generation. It's three fundamental tenets are the following. First, it assumes that much domain-specific semantic, linguistic, and rhetoric knowledge is required in order for a computer to automatically produce intelligent and fluent text. Second, it assumes that production system languages, such as those used to build expert systems, are wellsuited to the task of representing and integrating semantic, linguistic, and rhetoric knowledge. Finally, it holds that macro-level knowledge units, such as whole seman-tic messages, a phrasal lexicon, clausal grammatical categories, and a clause-combining grammar, provide an appropriate level of knowledge representation for generating that type of text which may be categorized as periodic summary reports. These three tenets guide the design and implementation of a knowledge-based report generation system.The first application of the technique of knowledge-based report generation is a partially implemented stock report generator called Aria. Data from a Dow Jones stock quotes database serves as input to the system, and the opening paragraphs of a stock market summary are produced as output. As more semantic and linguistic knowledge about the stock market is added to the system, it will be able to generate longer, more informative reports. (1) after climbing steadily through most of the morning , the stock market was pushed downhill late in the day stock prices posted a small loss , with the indexes turning in a mixed showing yesterday in brisk trading .the Dow Jones average of 30 industrials surrendered a 16.28 gain at 4pro and declined slightly , finishing the day at 1083.61 ,off 0.18 points. 2wall street's securities markets rose steadily through most of the morning , before sliding downhill late in the day the stock market posted a small loss yesterday , with the indexes finishing with mixed results in active trading .the Dow Jones average of 30 industrials surrendered a 16.28 gain at 4pro and declined slightly , to finish at 1083.61 , off 0.18 points .In order to generate accurate and fluent summaries, a knowledge-based report generator performs two main tasks: first, it infers semantic messages from the data in the database; second, it maps those messages into phrases in its phrasal lexicon, stitching them together according to the rules of its clause-combining grammar, and incorporating rhetoric constraints in the process. 
As the work of McKeown I and Mann and Moore 2 demonstrates, neither the problem of deciding what to say nor the problem of determining how to say it is trivial, and as'Appelt 3 has pointed out, the distinction between them is not always clear.A knowledge-based report generator consists of the following four independent, sequential components: 1) a fact generator, 2) a message generator, 3) a discourse organizer, and 4) a text generator. Data from the database serves as input to the first module, which produces a stream of facts as output; facts serve as input to the second module, which produces a set of messages as out-put; messages form the input to the third module, which organizes them and produces a set of ordered messages as output; ordered messages form the input to the fourth module, which produces final text as output. The modules function independently and sequentially for the sake of computational manageability at the expense of psychological validity.With the exception of the first module, which is a straightforward C program, the entire system is coded in the OPS5 production system language. 4 At the time that the sample output above was generated, module 2, the message generator, consisted of 120 production rules; module 3, the discourse organizer contained 16 production rules; and module 4, the text generator, included 109 production rules and a phrasal dictionary of 519 entries. Real time processing requirements for each module on a lightly loaded VAX 11/780 processor were the following: phase 1 16 seconds, phase 2 -34 seconds, phase 3 -24 seconds, phase 4 -1 minute, 59 seconds.The fundamental knowledge constructs of the system are of two types: 1) static knowledge structures, or memory elements, which can be thought of as ndimensional propositions, and 2) dynamic knowledge structures, or production rules, which perform patternrecognition operations on n-dimensional propositions, Static knowledge structures come in five flavors: facts. messages, lexicon entries, medial text elements, and various control elements. Dynamic knowledge constructs occur in ten varieties: inference productions, ordering productions, discourse mechanics productions, phrase selection productions, syntax selection productions, anaphora selection productions, verb morphology productions, punctuation selection productions, writing productions, and various control productions.The function of the first module is to perform the arithmetic computation required to produce facts that contain the relevant information needed to infer interesting messages, and to write those facts in the OPS5 memory element format. 
For example, the fact that indicates the closing status of the Dow Jones Average of 30 Industrials for January 12, 1983 is:(make fact "fname CLb-~rAT "iname DJI "itype COMPOS "date 01/12 "hour CLOSE "openlevel 1084.25 "high-level 1105.13 "low-level 1075.88 "close-level 1083.61 "cumul-dir DN "cumul-deg 0.18)The function of the second module is to inter interesting messages from the facts using inferencing productions such as the following:(p instan-mixedup (goal "stat act "op instanmixed) (fact "(name CLSTAT "iname DJI "cumul-dir UP "repdate <date>) (fact "(name ADVDEC "iname NYSE "advances <x> "declines {<y> > <x>}) (make message "top GENMKT "subtop MIX "mix mixed "repdate <date> "subjclass MKT "tim close) (make goal "star pend "op writemessage) (remove 1) )This production infers that if the closing status of the Dow had a direction of "up', and yet the number of declines exceeded the number of advances for the day, then it can be said that the market was mixed. The message that is produced looks like this:(make message "repdate 01/12 "top GENMKT "subsubtop nil "subtop MIX "subjclass MKT "dir nil "deg nil "vardeg I nil ] "varlev I nil [ "mix mixed "chg nil "sco nil "tim close "vartim I nil i "dur nil "vol nil "who nil ) The inferencing process in phase 2 is hierarchically controlled.Module 3 performs the uncomplicated task of grouping messages into paragraphs, ordering messages within paragraphs, and assigning a priority number to each message. Priorities are assigned as a function of topic and subtopic. The system "knows" a default ordering sequence, and it "knows" some exception rules which assign higher priorities to messages of special significance, such as indicators hitting record highs. As in module 2, processing is hierarchically controlled. Eventually, modules 2 and 3 should be combined so that their knowledge could be shared.The most complicated processing is performed by module 4. This processing is not hierarchically controlled, but instead more closely resembles control in an ATN. Module 4, the text generator, coordinates and executes the following activities: 1) selection of phrases from the phrasal lexicon that both capture the semantic meaning of the message and satisfy rhetorical constraints; 2) selection of appropriate syntactic forms for predicate phrases, such as sentence, participial clause, prepositional phrase, etc.; 3) selection of appropriate anaphora for subject phrases 4) morphological processing of verbs; 5) interjection of appropriate punctuation; and 6) control of discourse mechanics, such as inclusion of more than one clause per sentence and more than one sentence per paragraph.The module 4 processor is able to coordinate and execute these activities because it incorporates and integrates the semantic, syntactic, and rhetoric knowledge it needs into its static and dynamic knowledge structures. 
For example, a phrasal lexicon entry that might match the "mixed market" message is the following:(make phraselex "top GENMKT "subtop MIX "mix mixed "chg nil "tim close "subjtype NAME "subjclass MKT *predfs turned Apredfpl turned "predpart turning "predinf ~to turnl ^predrem ~n a mixed showing] "fen 9 "rand 5 "imp 11) An example of a syntax selection production th,tt would select the syntactic form subordinate-participial-clause as an appropriate form for a phrase (a~) in "after rising steadily through most of the morning") is the following:(p 5.selectsu borpartpre-selectsyntax (goal ^stat act "op selectsyntax); 1 (sentreq "sentstat nil) ; 2 (message "foc in "top <t> "tim <> nil "subjclass <sc>) ; 3 (message "foc nil "top <t> "tim <> nil "subjclass <sc>) ; 4 (paramsynforms "suborpartpre <set>) (randnum "randval < <set>) (lastsynform "form << initsent prepp >> ) -(openingsynform "form < < suborsent suborpart > >) -(message "foc in "tim close) For each message, in sequence, the system first selects a predicate phrase that matches the semantic content of the message, and next selects a syntactic form. such as sentence or prepositional phrase, into which form the predicate phrase may be hammered. The system's default goal is to form complex sentences by combining a variable number of messages expressed m a variety of syntactic forms in each sentence. Every message may be expressed in the syntactic form of a simple sentence. But under certain grammatical and rhetorical conditions, which are specified in the syntax selection productions, and which sometimes include looking ahead at the next sequential message, the system opts for a different syntactic form.The right-branching behavior of the system implies that at any point the system has the option to lay down a period and start a new ~ntence. It also implies that embedded subject-complement forms, such as relative ;5 ;6 ;7 clauses modifying subjects, are trickier to implement (and have not been implemented as yet). That embedded subject complements pose special difficulties should not be considered discouraging. Developmental linguistics research reveals that "operations on sentence subjects, including subject complementation and relative clauses modifying subjects" are among the last to appear in the acquisition of complex sentences, 7 and a knowledge-based report generator incorporates the basic mechanism for eventually matching messages to nominalizations of predicate phrases to create subject complements, as well as the mechanism for embedding relative clauses.How does one determine what knowledge must incorporated into a knowledge-based report generator? Because the goal of a knowledge-based report generator is to produce reports that are indistinguishable from reports written by people for the same database, it is logical to turn to samples of naturally generated text from the specific domain of discourse in order to gain insights into the semantic, linguistic, and rhetoric knowledge requirements of the report generator.Research in machine translation s and text understanding 9 has demonstrated that not only does naturally generated text disclose the lexicon and grammar of a sublanguage, but it also reveals the essential semantic classes and attributes of a domain of discourse, as well as the relations between those classes and attributes. Thus, samples of actual text may be used to build the phrasal dictionary for a report generator and to define the syntactic categories that a generator must have knowledge of. 
Similarly, the semantic classes, attributes and relations revealed in the text define the scope and variety of the semantic knowledge the system must incorporate in order to infer relevant and interesting messages from the database.Ana's phrasal lexicon consists of subjects, such as "wall street's securities markets", and predicates, such as "were swept into a broad and steep decline", which are extracted from the text of naturally generated stock reports, The syntactic categories Ann knows about are the clausal level categories that are found in the same text, such as, sentence, coordinate-sentence, subordinate-sentence, subordinate-participial-clause, prepositional-phrase, and others.Semantic analysis of a sample of natural text stock reports discloses that a hierarchy of approximately forty message classes accounts for nearly all of the semantic information contained in the "core market sentences" of stock reports. The term "core market sentences" was introduced by Kittredge to refer to those sentences which can be inferred from the data in the data base without reference to external events such as wars, strikes, and corporate or government policy making. 1° Thus, for example, Ana could say "Eastman Kodak advanced 2 3/4 to 85 3/4;" but it could not append "it announced development of the world's fastest color film for delivery in 1983.". Aria currently has knowledge of only six message classes. These include the closing market status message, the volume of trading message, and the mixed market message, the interesting market fluctuations message, the closing Dow status message, and the interesting Dow fluctuations message.The use of production systems for natural language processing was suggested as early as 1972 by Heidorn,ll whose production language NLP is currently being used for syntactic processing research. A production system for language understanding has been implemented in OPS5 by Frederking. 12 Many benefits are derived from using a production system to represent the knowledge required for text generation. Two of the more important advantages are the ability to integrate semantic, syntactic, and rhetoric knowledge, and the ability to extend and tailor the system easily.Knowledge integration is evident in the production rule displayed earlier for selecting the syntactic form of subordinate participial clause. In English, that production said: IF 1) there is an active goal to select a syntactic form 2) the sentence requirement has not been satisfied 3) the message currently in focus has topic <t>, subject class <sc>, and some non-nil time 4) the next sequential message has the same topic. 
subject class, and some non-nil time 5) the subordinate-participial-clause parameter is set at value <set> 6) the current random number is less than <set> 7) the last syntactic form used was either a prepositional phrase or a sentence initializer 8) the opening syntactic form of the last sentence was not a subordinate sentence or a subordinate participial clause 9) the time attribute of the message in focus does not have value 'close' THEN 1) remove the goal of selecting a syntactic form 2) make the current syntactic form a subordinate participial clause 3) modify the next sequential message to put it in peripheral focus 4) set a goal to select a subordinating conjunction.It should be apparent from the explanation that the rule integrates semantic knowledge, such as message topic and time, syntactic knowledge, such as whether the sentence requirement has been satisfied, and rhetoric knowledge, such as the preference to avoid using subordinate clauses as the opening form of two consecutive sentences.Conditions number 5 and 6, the syntactic form parameter and the random number, are examples of control elements that are used for syntactic tailoring. A syntactic form parameter may be preset at any value between 1 and 11 by the system user. A value of 8, for example, would result in an 80 percent chance that the rule in which the parameter occurs would be satisfied if all its other conditions were satisfied. Consequently, on 20 percent of the occasions when the rule would have been otherwise satisfied, the syntactic form parameter would prevent the rule from firing, and the system would be forced to opt for a choice of some other syntactic form. Thus, if the user prefers reports that are low on subordinate participial clauses, the subordinate participial clause parameter might be set at 3 or lower.The following production contains the bank of parameters as they were set to generate text sample (2) above:(p _ l.setparams (goal "stat act "op setparams) (remove 1) (make paramsyllables "val 30) (make parammessages "val 3) (make paramsynforms "sentence 11 "coorsent 11 "suborsent 11 "prepphrase 11 "suborsentpre 5 "suborpartpre 8 "suborsentpost 8 "suborpartpost 11 "subol'partsentpost I 1When sample text (1) was generated, all syntactic form parameters were set at 11. The first two parameters in the bank are rhetoric parameters. They control the maximum length of sentences in syllables (roughly) and in number of messages per sentence.Not only does production system knowledge representation allow syntactic tailoring, but it also permits semantic tailoring. Aria could be tailored to focus on particular stocks or groups of stocks to meet the information needs of individual users. Furthermore, a production system is readily extensible. Currently, Ana has only a small amount of general knowledge about the stock market and is far from a stock market expert. But any knowledge that can be made explicit can be added to the system prolonged incremental growth in the knowledge of the system could someday result in a system that truly is a stock market expert.The problem of dealing with the complexity of natural language is made much more tractable by working in macro-level knowledge constructs, such as semantic units consisting of whole messages, lexical iter-¢ ~,~,asisting of whole phrases, syntactic categories at the clause level, and a clause-combining grammar. Macrolevel processing buys linguistic fluency at the cost of semantic and linguistic flexibility. 
However, the loss of flexibility appears to be not much greater than the constraints imposed by the grammar and semantics of the sublanguage of the domain of discourse. Furthermore, there may be more to the notion of macro-level semantic and linguistic processing than mere computational manageability.The notion of a phrasal lexicon was suggested by Becker, 13 who proposed that people generate utterances "mostly by stitching together swatches of text that they have heard before. Wilensky and Arens have experimented with a phrasal lexicon in a language understanding system. 14 I believe that natural language behavior will eventually be understood in terms of a theory of stratified natural language processing in which macrolevel knowledge constructs, such as those used in a knowledge-based report generator, occur at one of the higher cognitive gtrata.A poor but useful analogy to mechanical gearshifting while driving a car can be drawn. Just as driving in third gear makes most efficient use of an automobile's resources, so also does generating language in third gear make most efficient use of human information processing resources. That is, matching whole phrases and applying a clause-combining grammar is cognitively economical. But when only a near match for a message can be found in a speaker's phrasal dictionary, the speaker must downshift into second gear, and either perform some additional processing on the nhrase to transform it into the desired form to match the message, or perform some processing on the message to transform it into one that matches the phrase. And if not even a near match for a message can be found, the speaker must downshift into first gear and either construct a phrase from elementary texicai items, including words, prefixes, and suffixes, or reconstruct the message.As currently configured, a knowledge-based text generator operates only in third gear. Because the units of processing are linguistically mature whole phrases, the report generation system can produce fluent text without having the detailed knowledge-needed to construct mature phrases from their elementary components. But there is nothing except the time and insight of a system implementor to prevent this detailed knowledge from being added to the system. By experimenting with additional knowledge, a system could gradually be extended to shift into lower gears, to exhibit greater interaction between semantic and linguistic components, and to do more flexible, if not creative, generation of semantic messages and linguistic phrases. A knowledge-based report generator may be viewed as a starting tool for modeling a stratiform theory of natural language processing.Knowledge-based report generation is practical because it tackles a moderately ill-defined problem with an effective technique, namely, a macro-level, knowledge-based, production system technique. Stock market reports are typical instances of a whole class of summary-type periodic reports for which the scope and variety of semantic and linguistic complexity is great enough to negate a straightforward algorithmic solution, but constrained enough to allow a high-level cross-wise slice of the variety of knowledge to be effectively incorporated into a production system. Even so, it will be some time before the technique is cost effective. 
The time required to add knowledge to a system is greater than the time required to add productions to a traditional expert system.Most of the time is spent doing semantic analysis for the purpose of creating useful semantic classes and attributes, and identifying the relations between them. Coding itself goes quickly, but then the system must be tested and calibrated (if the guesses on the semantics were close) or redone entirely (if the guesses were not close). Still, the initial success of the technique suggests its value both as a basic research tool, for exploring increasingly more detailed semantic and linguistic processes, and as an applied research tool, for designing extensible and tailorable automatic report generators. Appendix:
null
null
null
null
{ "paperhash": [ "keown|the_text_system_for_natural_language_generation:_an_overview", "wilensky|phran_-_a_knowledge-based_natural_language_understander", "appelt|problem_solving_applied_to_language_generation", "moore|a_snapshot_of_kds._a_knowledge_delivery_system", "becker|the_phrasal_lexicon", "heidorn|natural_language_inputs_to_a_simulation_programming_system:_an_introduction", "mckeown|the_text_system_for_natural_language_generation:_an_overview", "mckeown|generating_natural_language_text_in_response_to_questions_about_database_structure" ], "title": [ "THE TEXT SYSTEM FOR NATURAL LANGUAGE GENERATION: AN OVERVIEW", "PHRAN - A Knowledge-Based Natural Language Understander", "Problem Solving Applied to Language Generation", "A Snapshot of KDS. A Knowledge Delivery System", "The Phrasal Lexicon", "Natural language inputs to a simulation programming system: An introduction", "The Text System for Natural Language Generation: an Overview", "Generating natural language text in response to questions about database structure" ], "abstract": [ "Computer-based generation of natural language requires consideration of two different types of problems: 1) determining the content and textual shape of what is to be said, and 2) transforming that message into English. A computational solution to the problems of deciding what to say and how to organize it effectively is proposed that relies on an interaction between structural and semantic processes. Schemas, which encode aspects of discourse structure, are used to guide the generation process. A focusing mechanism monitors the use of the schemas, providing constraints on what can be said at any point. These mechanisms have been implemented as part of a generation method within the context of a natural language database system, addressing the specific problem of responding to questions about database structure.", "We have developed an approach to natural language processing in which the natural language processor is viewed as a knowledge-based system whose knowledge is about the meanings of the utterances of its language. The approach is oriented around the phrase rather than the word as the basic unit. We believe that this paradigm for language processing not only extends the capabilities of other natural language systems, but handles those tasks that previous systems could perform in a more systematic and extensible manner.We have constructed a natural language analysis program called PHRAN (PHRasal ANalyzer) based in this approach. This model has a number of advantages over existing systems, including the ability to understand a wider variety of language utterances, increased processing speed in some cases, a clear separation of control structure from data structure, a knowledge base that could be shared by a language production mechanism, greater ease of extensibility, and the ability to store some useful forms of knowledge that cannot readily be added to other systems.", "This research was supported at SRI International by the Defense Advanced Research Projects Agency under contract N00039--79--C--0118 with the Naval Electronic Systems Command. The views and conclusions contained in this document are those of the author and should not be interpreted as representative of the official policies either expressed or implied of the Defense Advanced Research Projects Agency, or the U. S. Government. 
The author is grateful to Barbara Grosz, Gary Hendrix and Terry Winograd for comments on an earlier draft of this paper.", "SUMMARY KDS Is a computer program which creates multl-par~raph, Natural Language text from a computer representation of knowledge to be delivered. We have addressed a number of Issues not previously encountered In the generation of Natural Language st the multi-sentence level, vlz: ordering among sentences and the scope of each, quality comparisons between alternative 8~regations of sub-sententJal units, the coordination of communication", "Theoretical linguists have in recent years concentrated their attention on the productive aspect of language, wherein utterances are formed combinatorically from units the size of words or smaller. This paper will focus on the contrary aspect of language, wherein utterances are formed by repetition, modification, and concatenation of previously-known phrases consisting of more than one word. I suspect that we speak mostly by stitching together swatches of text that we have heard before; productive processes have the secondary role of adapting the old phrases to the new situation. The advantage of this point of view is that it has the potential to account for the observed linguistic behavior of native speakers, rather than discounting their actual behavior as irrelevant to their language. In particular, this point of view allows us to concede that most utterances are produced in stereotyped social situations, where the communicative and ritualistic functions of language demand not novelty, but rather an appropriate combination of formulas, cliches, idioms, allusions, slogans, and so forth. Language must have originated in such constrained social contexts, and they are still the predominant arena for language production. Therefore an understanding of the use of phrases is basic to the understanding of language as a whole.You are currently reading a much-abridged version of a paper that will be published elsewhere later.", "A simulation programming system with which models for simple queuing problems can be built through naturallanguage interaction with a computer is described. In this system the English statement of a problem is first translated into a language -independent entity-attribute-value information structure, which can then be translated back into an equivalent English description and into a GPSS simulation program for the problem. This processing is done on an IBM 360/67 by a FORTRAN program which is guided by a set of stratified decoding and encoding rules written in a grammar-rule language developed for this system. A detailed example of the use of the system is included. This task was supported by the Information Systems Program of the Office of Naval Research as Project NR 049314, under Project Order PO 1-0177. The facilities of the W.R. Church Computer Center were utilized for this research.", "Computer-based generation of natural language requires consideration of two different types of problems: i) determining the content and textual shape of what is to be said, and 2) transforming that message into English. A computational solution to the problems of deciding what to say and how to organize it effectively is proposed that relies on an interaction between structural and semantic processes. Schemas, which encode aspects of discourse structure, are used to guide the generation process. A focusing mechanism monitors the use of the schemas, providing constraints on what can be said at any point. 
These mechanisms have been implemented as part of a generation method within the context of a natural language database system, addressing the specific problem of responding to questions about", "There are two major aspects of computer-based text generation: (1) determining the content and textual shape of what is to be said; and (2) transforming that message into natural language. Emphasis in this research has been on a computational solution to the questions of what to say and how to organize it effectively. A generation method was developed and implemented in a system called TEXT that uses principles of discourse structure, discourse coherency, and relevancy criterion. \nThe main features of the generation method developed for the TEXT strategic component include (1) selection of relevant information for the answer, (2) the pairing of rhetorical techniques for communication (such as analogy) with discourse purposes (for example, providing definitions) and (3) a focusing mechanism. Rhetorical techniques, which encode aspects of discourse structure, are used to guide the selection of propositions from a relevant knowledge pool. The focusing mechanism aids in the organization of the message by constraining the selection of information to be talked about next to that which ties in with the previous discourse in an appropriate way. \nThis work on generation has been done within the framework of a natural language interface to a database system. The implemented system generates responses of paragraph length to questions about database structure. Three classes of questions have been considered: questions about information available in the database, requests for definitions, and questions about the differences between database entities. \nThe main theoretical results of this research have been on the effect of discourse structure and focus constraints on the generation process. A computational treatment of rhetorical devices has been developed which is used to guide the generation process. Previous work on focus of attention has been extended for the task of generation to provide constraints on what to say next. The use of these two interacting mechanisms constitutes a departure from earlier generation systems. The approach taken in this research is that the generation process should not simply trace the knowledge representation to produce text. Instead, communicative strategies people are familiar with are used to effectively convey information. This means that the same information may be described in different ways on different occasions." ], "authors": [ { "name": [ "Kathleen Keown" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "R. Wilensky", "Y. Arens" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "D. Appelt" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "James A. Moore", "W. Mann" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Joseph D. Becker" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "George E. Heidorn" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "K. 
McKeown" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "K. McKeown" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null, null, null, null, null, null ], "s2_corpus_id": [ "61208346", "16980721", "436199", "41559124", "3919430", "60214583", "17676483", "62743223" ], "intents": [ [ "background" ], [], [], [ "methodology" ], [], [], [ "background" ], [] ], "isInfluential": [ false, false, false, false, false, false, false, false ] }
Problem: The paper addresses the technique of Knowledge-Based Report Generation, which involves automatically generating natural language reports from computer databases using knowledge-based expert systems software. Solution: The hypothesis of the paper is that by utilizing domain-specific semantic and linguistic knowledge, macro-level semantic and linguistic constructs, and a production system approach to knowledge representation, a practical approach to text generation can be achieved through the implementation of a knowledge-based report generation system.
500
0.41
null
null
null
null
null
null
null
null
7b17d71d49739f08f5ed48dbd45f78fef1ba1ab0
129175
null
A Finite-State Parser for Use in Speech Recognition
This paper is divided into two parts. The first section motivates the application of finite-state parsing techniques at the phonetic level in order to exploit certain classes of contextual constraints. In the second section, the parsing framework is extended in order to account for 'feature spreading' (e.g., agreement and co-articulation) in a natural way.
{ "name": [ "Church, Kenneth W." ], "affiliation": [ null ] }
null
null
21st Annual Meeting of the Association for Computational Linguistics
1983-06-01
28
10
null
null
null
A program has been implemented [4] which parses a lattice of phonetic segments into a lattice of syllables and other phonological constituents. Except for its novel mechanism for handling features, it is very much like a standard chart parser (e.g., Earley's algorithm [7]). Recall that a chart parser takes as input a sentence and a context-free grammar and produces as output a chart like that below, indicating the starting point and ending point of each phrase in the input string. The agreement problem also arises in phonology. Consider the example of homorganic nasal clusters (e.g., camp, can't, sank), where the nasal agrees with the following obstruent in place of articulation. That is, the labial nasal /m/ is found before the labial stop /p/, the coronal nasal /n/ before the coronal stop /t/, and the velar nasal /ŋ/ before the velar stop /k/. This constraint, like subject-verb agreement, poses a problem for pure unaugmented context-free rules; it seems to be necessary to expand out each of the three cases: (13a) homorganic-nasal-cluster -> labial-nasal labial-obstruent; (13b) homorganic-nasal-cluster -> coronal-nasal coronal-obstruent; (13c) homorganic-nasal-cluster -> velar-nasal velar-obstruent. In an effort to alleviate this expansion problem, many researchers have proposed augmentations of various sorts (e.g., ATN registers [26], LFG constraint equations [16], GPSG meta-rules [11], local constraints [18], bit vectors [6, 22]). My own solution will be suggested after I have had a chance to describe the parser in further detail. This section will show how the grammar can be implemented in terms of operations on binary matrices. Suppose that the chart is decomposed into a sum of binary matrices: (14) Chart = syl M_syl + onset M_onset + peak M_peak + ..., where M_syl is a binary matrix describing the location of syllables and M_onset is a binary matrix describing the location of onsets, and so forth. Each of these binary matrices has a 1 in position (i, j) if there is a constituent of the appropriate part of speech spanning from the i-th position in the input sentence to the j-th position. (See figure 3.) Phrase-structure rules will be implemented with simple operations on these binary matrices. For example, the homorganic rule (13) could be implemented with the matrix operations described below. (Footnote 8: These matrices will sometimes be called segmentation lattices for historical reasons. Technically, these matrices need not conform to the restrictions of a lattice, and therefore, the weaker term graph is more correct. Footnote 9: In a probabilistic framework, one could replace all of the 1's and 0's with probabilities. A high probability in location (i, j) of the syllable matrix would say that there probably is a syllable from position i to position j; a low probability would say that there probably isn't a syllable between i and j. Most of the following applies to probability matrices as well as binary matrices, though the probability matrices may be less sparse and consequently less efficient.) (The example binary matrices of figure 3 are not reproduced here.) The matrices tend to be very sparse (almost entirely full of 0's) because syllable grammars are highly constrained. In principle, there could be n^2 entries. However, it can be shown that e (the number of 1's) is linearly related to n because syllables have finite length. In Church [4], I sharpen this result by arguing that e tends to be bounded by 4n as a consequence of a phonotactic principle known as sonority. Many more edges will be ruled out by a number of other linguistic constraints mentioned above: voicing and place assimilation, aspiration, flapping, etc. In short, these matrices are sparse because allophonic and phonotactic constraints are useful. M& (element-wise intersection) implements the "subject to" constraint. Nasal-cluster and place-assimilation are defined (in rules not reproduced here) so that the parser can process homorganic nasal clusters by processing place and manner phrases in parallel, and then synchronizing the results at the coda node with M&. That is, (17a) can be computed in parallel with (17b), and then the results are aligned when the coda is computed with (16), as illustrated below for the word tent. Imagine that the front end produces the following analysis: This parser is a bold departure from standard practice in two respects: (1) the input stream is feature-based rather than segmental, and (2) the output parse is a heterarchy of overlapping constituents (e.g., place and
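To make the matrix encoding concrete, here is a minimal sketch (not from the paper) of how constituent spans can be stored in binary matrices and how a rule can be checked by combining them. The feature lattices, span positions, and the decomposition of the homorganic constraint into a manner phrase and a place phrase are illustrative assumptions; concatenation is realized as boolean matrix multiplication and the "subject to" constraint as the element-wise intersection M& described above.

```python
import numpy as np

N = 6  # number of boundary positions in a tiny input lattice

def span_matrix(spans, n=N):
    """Binary matrix with a 1 at (i, j) for each constituent spanning positions i..j."""
    m = np.zeros((n, n), dtype=bool)
    for i, j in spans:
        m[i, j] = True
    return m

def concat(left, right):
    """Concatenation of two constituent types as boolean matrix multiplication:
    the result has a 1 at (i, j) iff the left part spans (i, k) and the right part (k, j)."""
    return (left.astype(int) @ right.astype(int)) > 0

# Hypothetical feature spans for something like "tent" ([t eh n t]); positions are made up.
nasal     = span_matrix([(2, 3)])                  # the /n/
obstruent = span_matrix([(0, 1), (3, 4)])          # the two /t/'s
coronal   = span_matrix([(0, 1), (2, 3), (3, 4)])  # all three segments are coronal

# Manner phrase: a nasal followed by an obstruent.
nasal_cluster = concat(nasal, obstruent)
# Place phrase: two coronal segments in a row (one case of place assimilation).
coronal_coronal = concat(coronal, coronal)

# "Subject to" constraint: element-wise intersection of the two analyses (M&).
homorganic = nasal_cluster & coronal_coronal
print(np.argwhere(homorganic))  # [[2 4]] -- a homorganic cluster spans positions 2..4
```

In the same spirit, independently computed place and manner lattices could be synchronized at the coda node by one element-wise intersection, as the text sketches for the word tent.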
It is well known that phonemes have different acoustic/phonetic realizations depending on the context. For example, the phoneme /t/ is typically realized with a different allophone (phonetic variant) in syllable-initial position than in syllable-final position. In syllable-initial position (e.g., Tom), /t/ is almost always released (with a strong burst of energy) and aspirated (with h-like noise), whereas in syllable-final position (e.g., cat), /t/ is often unreleased and unaspirated. It is common practice in speech research to distinguish acoustic/phonetic properties that vary a great deal with context (e.g., release and aspiration) from those that are relatively invariant to context (e.g., place, manner and voicing). In the past, the emphasis has been on invariants; allophonic variation is traditionally seen as problematic for recognition: (1) "In most systems for sentence recognition, such modifications must be viewed as a kind of 'noise' that makes it more difficult to hypothesize lexical candidates given an input phonetic transcription. To see that this must be the case, we note that each phonological rule [in an example to be presented below] ..." This evidence suggests that allophonic variation provides a rich source of constraints on syllable structure and word stress. The recognizer to be discussed here (and partly implemented in Church [4]) is designed to exploit allophonic and phonotactic cues by parsing the input utterance into syllables and other suprasegmental constituents using phrase-structure parsing techniques.
It might be helpful to work out an example in order to illustrate how parsing can play a role in lexical retrieval. Consider the phonetic transcription, mentioned above in the citation from Klatt [20, p. 1346; 2, pp. 548-549]: (3) [dD~hlf_lt) tam]. It is desired to decode (3) into the string of words: (4) Did you hit it to Tom? In practice, the lexical retrieval problem is complicated by errors in the front end. However, even with an ideal error-free front end, it is difficult to decode (3) ... then it seems natural to propose a syllabic parser for processing speech, by analogy with sentence parsers that have become standard practice in the natural language community for processing text.
null
null
null
null
{ "paperhash": [ "fujimura|temporal_organization_of_articulatory_movements_as_a_multidimensional_phrasal_structure", "joshi|phrase_structure_trees_bear_more_fruit_than_you_would_have_thought", "cook|word_verification_in_a_speech_understanding_system", "denes|speech_recognition_by_machine:_a_review", "ffitch|course_notes", "henisz-dostert|how_features_resolve_syntactic_ambiguity", "earley|an_efficient_context-free_parsing_algorithm", "earley|an_efficient_context-free_parsing_algorithm", "church|on_memory_limitations_in_natural_language_processing", "chomsky|the_sound_pattern_of_english", "fry|duration_and_intensity_as_physical_correlates_of_linguistic_stress" ], "title": [ "Temporal Organization of Articulatory Movements as a Multidimensional Phrasal Structure", "Phrase Structure Trees Bear More Fruit Than You Would Have Thought", "Word verification in a speech understanding system", "Speech recognition by machine: A review", "Course Notes", "How features resolve syntactic ambiguity", "An efficient context-free parsing algorithm", "An efficient context-free parsing algorithm", "On memory limitations in natural language processing", "The Sound Pattern of English", "Duration and Intensity as Physical Correlates of Linguistic Stress" ], "abstract": [ "Abstract Recently obtained data from X-ray microbeam experiments indicate inherently multidimensional articulatory phenomena with respect to temporal characteristics of speech. Elementary gestures in different articulatory dimensions for phonetic elements, typically representing demisyllabic transitions, constitute the content of a phrasal frame.", "In this paper we will present several results concerning phrase structure trees. These results show that phrase structure trees, when viewed in certain ways, have much more descriptive power than one would have thought. We have given a brief account of local constraints on structural descriptions and an intuitive proof of a theorem about local constraints. We have compared the local constraints approach to some aspects of Gazdar's framework and that of Peters and Ritchie and of Karttunen. We have also presented some results on skeletons (phrase structure trees without labels) which show that phrase structure trees, even when deprived of the labels, retain in a certain sense all the structural information. This result has implications for grammatical inference procedures.", "If, in a speech understanding system, word matching is performed at the phonetic level, then the accurate determination of the locations and identities of words present in an unknown utterance is necessarily limited by the phonetic segmentation and labeling. Verification offers an alternative strategy by doing a top-down parametric word match independent of segmentation and labeling. The result is a distance measure between the reference parameterization of a hypothesized word and the computed parameterization of the real speech. This distance is interpreted as the likelihood of that word having actually occurred over a given portion of the utterance.", "This paper provides a review of recent developments in speech recognition research. The concept of sources of knowledge is introduced and the use of knowledge to generate and verify hypotheses is discussed. The difficulties that arise in the construction of different types of speech recognition systems are discussed and the structure and performance of several such systems is presented. 
Aspects of component subsystems at the acoustic, phonetic, syntactic, and semantic levels are presented. System organizations that are required for effective interaction and use of various component subsystems in the presence of error and ambiguity are discussed.", "Algebraic manipulation covers branches of software, particularly list processing, mathematics, notably logic and number theory, and applications largely in physics. The lectures will deal with all of these to a varying extent. The mathematical content will be kept to a minimum.", "Ambiguity is a pervasive and important aspect of natural language. Ambiguities, which are disambiguated by context, contribute powerfully to the expressiveness of natural language as compared to formal languages. In computational systems using natural language, problems of properly controlling ambiguity are particularly large, partially because of the necessity to circumvent parsings due to multiple orderings in the application of rules.Features, that is, subcategorizations of parts-of-speech, constitute an effective means for controlling syntactic ambiguity through ordering the hierarchical organization of syntactic constituents. This is the solution adopted for controlling ambiguity in REL English, which is part of the REL (Rapidly Extensible Language) System. REL is a total software system for facilitating man/machine communications. The efficiency of processing natural language in REL English is achieved both by the detailed syntactic aspects which are incorporated into the REL English grammar, and by means of the particular implementation for processing features in the parsing algorithm.", "A parsing algorithm which seems to be the most efficient general context-free algorithm known is described. It is similar to both Knuth's LR(<italic>k</italic>) algorithm and the familiar top-down algorithm. It has a time bound proportional to <italic>n</italic><supscrpt>3</supscrpt> (where <italic>n</italic> is the length of the string being parsed) in general; it has an <italic>n</italic><supscrpt>2</supscrpt> bound for unambiguous grammars; and it runs in linear time on a large class of grammars, which seems to include most practical context-free programming language grammars. In an empirical comparison it appears to be superior to the top-down and bottom-up algorithms studied by Griffiths and Petrick.", "A parsing algorithm which seems to be the most efficient general context-free algorithm known is described. It is similar to both Knuth's LR(k) algorithm and the familiar top-down algorithm. It has a time bound proportional to n3 (where n is the length of the string being parsed) in general; it has an n2 bound for unambiguous grammars; and it runs in linear time on a large class of grammars, which seems to include most practical context-free programming language grammars. In an empirical comparison it appears to be superior to the top-down and bottom-up algorithms studied by Griffiths and Petrick.", "This paper proposes a welcome hypothesis: a computationally simple device is sufficient for processing natural language. Traditionally it has been argued that processing natural language syntax requires very powerful machinery. Many engineers have come to this rather grim conclusion; almost all working parsers are actually Turing Machines (TM). For example, Woods specifically designed his Augmented Transition Networks (ATN''s) to be Turing Equivalent. If the problem is really as hard as it appears, then the only solution is to grin and bear it. 
Our own position is that parsing acceptable sentences is simpler because there are constraints on human performance that drastically reduce the computational requirements (time and space bounds). Although ideal linguistic competence is very complex, this observation may not apply directly to a real processing problem such as parsing. By including performance factors, it is possible to simplify the computation. We will propose two performance limitations, bounded memory and deterministic control, which have been incorporated in a new parser YAP.", "Since this classic work in phonology was published in 1968, there has been no other book that gives as broad a view of the subject, combining generally applicable theoretical contributions with analysis of the details of a single language. The theoretical issues raised in The Sound Pattern of English continue to be critical to current phonology, and in many instances the solutions proposed by Chomsky and Halle have yet to be improved upon.Noam Chomsky and Morris Halle are Institute Professors of Linguistics and Philosophy at MIT.", "The experiments reported in this paper are an attempt to explore the influence of certain physical cues on the perception of linguistic stress patterns. The material chosen was a group of English words in which a change of function from noun to verb is commonly associated with a shift of stress from the first to the second syllable. Spectrograms were used to determine the vowel duration and intensity ratios which occur in these words and this information was applied in making up a test in which listeners' judgments of stress could be correlated with variations in the duration and intensity ratios. The results of the experiments show that duration and intensity ratios are both cues for judgments of stress and that, in the material studied, duration ratio is a more effective cue than intensity ratio." ], "authors": [ { "name": [ "O. Fujimura" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "A. Joshi", "L. Levy" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "C. Cook" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Flanagan Denes", "Rabiner Fujimura", "Ritea Barnett", "Medress Lea" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "John ffitch" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Bozena Henisz-Dostert", "F. B. Thompson" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "J. Earley" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "J. Earley" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Kenneth Ward Church" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Noam Chomsky", "M. Halle" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "D. 
Fry" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null, null, null, null, null, null, null, null, null ], "s2_corpus_id": [ "46841228", "1552091", "46002420", "27641570", "195926166", "6574474", "209398987", "35664", "63812413", "60457972", "120991482" ], "intents": [ [], [], [], [], [ "background" ], [ "background" ], [], [], [ "background" ], [], [] ], "isInfluential": [ false, false, false, false, false, false, false, false, false, false, false ] }
Problem: The paper aims to investigate the application of finite-state parsing techniques at the phonetic level to exploit contextual constraints and address allophonic variation in speech recognition systems. Solution: The hypothesis proposes that by extending the parsing framework to account for feature spreading, such as agreement and co-articulation, in a natural way, it will enhance lexical retrieval and recognition accuracy in speech processing systems.
500
0.02
null
null
null
null
null
null
null
null
1f40b0fe1b6b7663d3e1b701b508f135b03f30ef
5293716
null
{D}-Theory: Talking about Talking about Trees
Linguists, including computational linguists, have always been fond of talking about trees. In this paper, we outline a theory of linguistic structure which talks about talking about trees; we call this theory Description theory (D-theory). While important issues must be resolved before a complete picture of D-theory emerges (and also before we can build programs which utilize it), we believe that this theory will ultimately provide a framework for explaining the syntax and semantics of natural language in a manner which is intrinsically computational. This paper will focus primarily on one set of motivations for this theory, those engendered by attempts to handle certain syntactic phenomena within the framework of deterministic parsing.
{ "name": [ "Marcus, Mitchell P. and", "Hindle, Donald and", "Fleck, Margaret M." ], "affiliation": [ null, null, null ] }
null
null
21st Annual Meeting of the Association for Computational Linguistics
1983-06-01
18
214
null
The key idea of D-theory is that a syntactic analysis of a sentence of English (or other natural language) consists of a description of its syntactic structure. Such a description contains information which differs from that contained in a standard tree structure in two crucial ways: 1) The primitive predicate for indicating hierarchical structure in a D-theory description is "dominates" rather than "directly dominates". (A node A is said to dominate a node B if A is some ancestor of B; A is said to directly dominate B if A is the immediate parent of B.) A D-theory analysis thus expresses directly only what structures are contained (somewhere) within larger structures, but does not indicate per se what the immediate constituents of any particular constituent are. A tree structure, on the other hand, encodes which nodes are directly dominated by other nodes in the analysis; it indicates directly the immediate constituents of each node. In a standard parse tree, the topmost S node might directly dominate exactly a Noun Phrase node, an Aux node and a Verb Phrase node; it is thus made up of three subparts: that NP, that Aux, and that VP. 2) A D-theory description uses names to make statements about entities, and does not contain the entities themselves. Furthermore, there is no distinguished set of names which are taken to be standard names or rigid designators; i.e. given only a name, one cannot tell what particular syntactic entity it refers to. (This is the primary reason that we view D-theory representations as descriptions and not merely as directed acyclic graphs.) Because there are no standard names, if one is presented with two descriptions, each in terms of a different name, one can tell with certainty only if the two names refer to different entities, but never (for sure) if they refer to the same entity. In the latter case, there is always potential ambiguity. To take a commonplace example, given that "John has red hair" and "Mr. Jones has black hair", one can be sure that John is not Mr. Jones. But if one is told "John has red hair" and "Mr. Jones wears glasses" and nothing more about either John or Mr. Jones, then it is impossible to tell whether John is or is not Mr. Jones. In the domain of syntax, if a D-theory description says that ... and nothing else is stated about W, X, Y or Z, then it cannot be determined whether X and Z are aliases for the same NP node or are names for two distinct nodes. If an additional statement is added to the description that "Y dominates Z", then it must be the case that X and Z name distinct entities. We will show in what follows that the use of names has important ramifications for linguistic theory and the theory of parsing. The structure of the rest of this paper is roughly as follows: We will first sketch the computational framework we build on, in essence that of [Marcus 80], and explore briefly what a parser for this kind of grammar might look like; in appearance, its data structures and grammar will be little different from that developed in [Berwick 82]. A series of syntactic phenomena will then be explored which resist elegant account within the earlier framework. For each phenomenon, we will present a simple D-theoretic solution together with exposition of the relevant aspects of D-theory. One final introductory comment: That D-theory expresses syntactic structure in terms of dominance rather than direct dominance may be reminiscent of [Lasnik & Kupin 1977] (henceforth L-K), but our use of the dominance predicate differs fundamentally from the L-K formulation both in the primacy of the predicate to the theory, and in the theory of syntax implied. Their theory derives domination relations from their primary representation of linguistic structure, namely a set of strings of terminals and nonterminals with specified properties. D-theory structures are expressed directly in terms of dominance relations; the linear order of constituents is only directly expressed for items in the lexical string. Despite appearances, D-theory and the Lasnik-Kupin formalization are not interdefinable.
We discuss the properties of the Lasnik-Kupin formalization at length in a forthcoming paper. [2] 2.0 Deterministic Tree-Building: The Old Theory. D-theory grows out of earlier work on deterministic parsing as deterministic tree building (as in e.g. [Marcus 1980], [Church 80] and [Berwick 82]). The essence of that work is the hypothesis that natural language can be analyzed by some process which builds a syntactic analysis indelibly (borrowing a term from [McDonald 83]); i.e. that any structure built by the parser is part of the correct analysis of the input. Again, in the context of this earlier theory, the form of the indelible syntactic analysis was that of a tree. One key idea of this earlier tree-building theory that we retain is the notion that a natural language parser can buffer and examine some small number (e.g. up to three) unattached constituents before being forced to add to its existing structures. (In D-theory, the node named X is attached to Y if the parser's description of the existing structure includes a predication of the form "Y dominates X", or, as we will henceforth write, "D(Y, X)". X is unattached if the parser's description of the existing structure includes no predication of the form "D(Y, X)", for any name Y.) We thus assume that such a parser will have the two principal data structures of these earlier deterministic parsers, a stack and a buffer. However, the stack and the buffer in a D-theory parser will contain names rather than constituents, and these data structures will be augmented by a data base where the description of the syntactic structure itself is built up by the parser. (While this might sound novel, a moment's reflection on LISP implementation techniques should assure the reader that this structure is far less different from that of older parsers like Parsifal and Fidditch [Hindle 83] than it might sound.) As we shall see below, however, a parser which embodies D-theory can recover (in some sense) from some of the constructions which would terminally confuse (or "garden path") a parser based on the deterministic tree-building theory. For D-theory to be psychologically valid, of course, it must be the case that just those constructions which do garden path a D-theory parser garden path people as well. (We might note in passing that recent experimental paradigms which explore online syntactic processing using eye-tracking technology promise to provide delicate tests of these hypotheses, e.g. [Rayner & Frazier 83].) Another goal of this earlier work was to find some way of procedurally representing grammars of natural languages which is brief and perspicuous, and which allows (and perhaps even forces) grammatical generalizations to be stated in a natural way. As is often argued, such a representation must be embodied by our language understanding faculty, given that the grammar of a language is learned incrementally and quickly by children given only limited evidence. (To recast this point from an engineering point of view, this property is also a prerequisite to writing a grammar for a subset of some given natural language which remains extensible, so that new constructions can be added to the grammar without global changes, and so that these new constructions will interact robustly with the old grammar.) Following [Shipman 78], as refined in [Berwick 82], we assume that the grammar is organized into a set of context-free rules, which we will call base templates, and a set of pattern-action rules.
As in Parsifal, each pattern consists of up to four elements, each of which is a partial description of an element in the buffer, or the accessible node in the stack (the "current active node"). Loosely following [Berwick 82], we assume that the action of each rule consists of exactly one of some small set of limited actions which might include the following: • Attach a node in the buffer to the current active node. • Switch the nodes in the first two buffer positions. • Insert a specified lexical item into a specified buffer slot. • Create a new current active node. • Insert an empty NP into the first buffer slot. (Where "attachment" is as defined above, and "create" means something like coin a new node name, and push it onto the active node stack.) Each rule is associated with some position in one of the base templates. So, for example, in figure 1 below, one base template is given, a highly simplified template for a sentence. Associated with the NP in the subject position of the sentence are several rules. The first rule says that if the first buffer position holds a name which is asserted to be an NP (informally: if there is an NP in the first buffer slot), then (informally) it is dominated by the S. The second says that if there is an auxiliary verb in the first slot followed by an NP, then switch them. And so on. Note that while a D-theory parser itself has no predicate with which to express direct dominance, the base templates explicitly encode just such information. Insofar as the parser makes its assertions of dominance on the basis of the phrase structure rules, the parser will behave very similarly to deterministic tree building parsers. In fact, the parser will typically (although, as we will see below, not always) behave in just such a fashion. Figure 1: S -> NP VP PP* {[NP] -> Attach} {[auxv] [NP] -> Switch} {[v, tenseless] -> Insert(NP, 0)}
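As an illustration of the buffer/stack organization and the pattern-action rules just described, here is a small sketch, not the authors' implementation: the node fields, rule names, and the way patterns are tested are assumptions, but the two actions mirror the Attach and Switch rules associated with the subject NP position in figure 1.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str        # a content-free identifier such as "np1"
    category: str    # e.g. "NP", "auxv", "S"

@dataclass
class ParserState:
    buffer: list                                     # up to ~3 unattached constituents under examination
    stack: list                                      # active nodes; the top is the current active node
    description: set = field(default_factory=set)    # D(parent, child) predications built so far

def rule_attach_subject(state: ParserState) -> bool:
    """{[NP] -> Attach}: an NP in the first buffer slot is asserted to be dominated by the active S."""
    if state.buffer and state.buffer[0].category == "NP":
        child = state.buffer.pop(0)
        state.description.add(("D", state.stack[-1].name, child.name))
        return True
    return False

def rule_switch_aux_np(state: ParserState) -> bool:
    """{[auxv][NP] -> Switch}: swap the first two buffer cells (e.g. for yes/no questions)."""
    if (len(state.buffer) >= 2 and state.buffer[0].category == "auxv"
            and state.buffer[1].category == "NP"):
        state.buffer[0], state.buffer[1] = state.buffer[1], state.buffer[0]
        return True
    return False

# A toy run for "Did John ...": Switch fires first, then Attach adds D(s1, np1).
state = ParserState(buffer=[Node("aux1", "auxv"), Node("np1", "NP")],
                    stack=[Node("s1", "S")])
rule_switch_aux_np(state)
rule_attach_subject(state)
print(state.description)   # {('D', 's1', 'np1')}
```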
Techniques which make use of the lookahead provided by buffering constituents can deterministically handle a perhaps surprising range of coordinate phenomena, as first demonstrated by the YAP parser [Church 80 ], but there appear to be fundamental limitations to what can be analyzed in this way. The central problem is that a tree building deterministic parser cannot examine the context necessary to determine what is conjoined to what without constructing nodes which may turn out to be spurious, given the (ultimate) correct analysis.In what follows, we will illustrate each of these problems in more detail and sketch an approach to the analysis of coordinate structures which we believe can be extended to handle such structures deterministically and without semantic interaction.Consider the problem of analyzing sentences like (11.1-2). These two sentences are identical at the level of preterminal symbols; they differ only in the particular lexical items chosen as nouns, with the schematic lexical structure indicated by (11.3). However, (11.1) has the favored reading that the apples, pears and cherries are all ripe and from local orchards, while in (11.2), only the cheese is ripe and only the cider is from local orchards. From this, it is clear that (11.1) is read as a conjunction of three nouns within one NP, while (11.2) is read as a conjunction of three individual NPs, with structures as indicated by (ll.Ia,2a). We assume here, crucially, that constituents in coordination are all attached to the same constituent; they can be thought of as "stacking" in a plane orthogonal to the standard referent, as [Chomsky 82] suggests. The conjunction itself is attached to the rightmost of the coordinate structures.(ll.1) They sell ripe apples, pears, and cherries from local orchards.(1 l.la) They sell [NP ripe [N apples] Thus, it would seem that to determine the level at which the structures are conjoined requires much pragmatic knowledge about fruit, flowers and the like.Note also that while (11.1-2) have particular primary readings, one needs to consider these sentences carefully to decide what the primary reading is. This is suggestive of the kind of syntactic vagueness that VanLehn argues characterizes many judgements of quantifier scope [VanLehn 78]. Note, however, that most evidence suggests that quantifier scope is not represented directly in syntactic structure, but is interpreted from that structure. For the readings of (11.1-2) to be vague in this way, the structures of (I l.la-2a) must be interpreted from syntactic structure, and not be part of it. It turns out that Dtheory, coupled with the assumption that the parser does not interact with semantic and pragmatic processing, provides an account which is consistent with these intuitions.But consider the D-theoretic analysis of (11.1); there are some surprises in store. Its representation will include predications like those of (12.1-8), where we are now careful to "unpack" informal names like "npl" to show that they consist of a content-free identifier and predications about the type of entity the identifier names. Here vpl is the name of a node whose head is "sell", apl an adjective phrase dominating "ripe", and ppl the PP "from local orchards." The analysis will also include predications about, the left-to-right order of the terminal string, which has been informally represented in (12.9); +X < Y" is to be read +X is the left of Y". 
We indicate the order of nonterminals here only for the sake of brevity; we use nl <n2as a shorthand forD(nl, 'cheese'); D(n2, 'bread'); 'cheese' < 'bread'.In particular, a D-theory analysis contains no explicit predications about left-right order of non-terminals.But given only the predications in (12), what can be said about the identities of the nodes named npl, np2, and np3? Under this description, the descriptions of npl, np2 and np3 are compatible descriptions; they are potentially descriptions of the same individual. They are all dominated by vpl, and each is an NP, so there is no conflict here, Each dominates a different noun, but several constituents of the same type can be dominated by the same node if they are in a coordinate structure (given the analysis of coordinate structures we assume) and if they are string adjacent. NI, n2 and n3 are string adjacent (given only (12)), so the fact that the nodes named npl, np2 and np3 dominate nouns which may turn out to be different does not make the descriptions of the NPs incompatible. (Indeed, if the nouns are viewed as a coordinate structure, then the structure of the nouns is the same as that of (11.1).) Furthermore, adjl is immediately to the left of and ppl is immediately to the right of all the nouns, so these constituents could be dominated by the same single NP that might dominate hi, n2 and n3 as well. Thus there is no information here that can distinguish npl from np2 from np3.The fact that the conjunction "and" is dominated by np3 does not block the above analysis. The addition of one domination predicate leaves it dominated by n3 (as well as np3, of course), thereby making n l, n2 and n3 a perfect coordinate structure, and leaving no barrier to npl, np2 and np3 being co-referent, But this means that the D-theory analysis of (11.1) has as standard referents both it and (11.2)! (This modifies our statement earlier in this paper about the uniqueness of the standard referent; we now must say that for each possible "stacking" of nodes, there is one standard referent.) For if npl, np2 and np3 corefer, then the analysis above shows that the structure described is exactly that of (11.2). There is also the possibility that just npl and np2 corefer, given the above analysis, which yields a reading where np2 is an appositive to npl, with npl and np3 coordinate structures (the structure of appositives is similar to that of coordinate structures, we assume); and the possibility that just np2 and np3 corefer, yielding a reading with npl and np2 coordinate structures, and np3 in apposition to np2. (The fact that we use a simplified phrase structure here is not an important fact. The analysis goes through equally as well with a full X-bar theoretic phrase component; the story is just much longer.)The upshot of this is that upon encountering constructions like (11), the parser can proceed by simply assuming that the structures are conjoined at the highest level possible, using different names for each of the potential highest level constituents. It can then analyze the (potentially) coordinate structures entirely independently of feedback from pragmatic and semantic knowledge sources. When higher cognitive processing of this description requires distinguishing at what level the structures are conjoined, pragmatics can be invoked where needed, but there need be no interaction with syntactic processes themselves. 
This is because, once again, it turns out if it is syntactically possible that structures should be conjoined at a lower level than that initially posited, the names of the potentially separate constituents simply can be viewed as aliases of the one node that does exist in the corresponding standard referent; in this case all predications about whatever node is named by the alias remain true, and thus once again no predications need to be revoked.We now see how it is that D-theory gives an account of the intuition that the fine structure of coordinations in vague, in the sense of VanLehn. For we have seen that pragmatics does not need to determine whether (e.g.) all the fruits in (11.1) are ripe or not for the syntactic analysis to be completed deterministically, exactly because the D-theory analysis leaves all (and, we also claim, only) the syntactically correct possibilities open. Thus the description given in (12) is appropriately vague between possible syntactic analyses of sentences like those schematized in (11.3). Thus, this new representation opens the way for a simple formal expression of the notion that some sentences may be vague in certain well defined ways, even though they are believed to be understood, and that this vagueness may not be resolved until a hearer's attention is called to the unresolved decision.7.3 The Problem of Nodes That Aren't There.While we can give only the briefest sketch here (the full story is quite long and complicated), exactly this use of names resolves yet another problem for the deterministic analysis of coordinate structures: To examine enough context (in the buffer) to decide what kind of structure is conjoined with what, a troe-building parser will often have to go out on a limb and posit the existence of nodes which may turn out not to exist after all. For example, if a tree-building parser has analyzed the inputs shown in (13.1-2) up to "worms" and has seen "and" and "frogs" in the (13.1) Birds eat small worms and frogs eat small flies. (13.2) Birds eat small worms and frogs.buffer, it will need to posit that "frogs" is a full NP to check to see if the pattern[conjunction] [NPI [verblis fulfilled, and thus if an S should be created with the NP as its head. But if the input is not as in (13.1), but as in (13.2), then positing the NP might be incorrect, because the correct analysis may be a noun-noun conjunction of "worms" and "frogs', (with the reading that birds eat worms and frogs, both of which are small).Of course, there is a second problem here for a tree-building parser, namely that (13.2) has a second reading which is an "NP and NP" conjunction. As we have seen above, there is no corresponding problem for a D-theory parser, because if it merely posits an NP dominating "frogs', the structure which will result for (13.2) is appropriately vague between both the NP reading and the noun reading of "frogs" (i.e. between the readings where the frogs are just plain frogs and where the frogs are small.)But the solution to the second problem for a D-theory parser is also a solution to the first! After seeing "and" and "frogs" in its buffer, a D-theory parser can simply posit an NP node dominating "frogs" and continue. If the input proceeds as in (13.1), then the parser will introduce an S node and assert that it dominates the new NP. This will make the descriptions of the NPs dominating "worms" and dominating "frogs" incompatible, i.e. this will assure that there really are two NPs in the standard referent. 
If the input proceeds as in (13.2), a D-theory parser will state that the node referred to by the new name is dominated by the previous VP, resulting in the structure described immediately above. To summarize, where a tree-building parser might be misled into creating a node which might not exist at all, there is no corresponding problem for a D-theory parser.
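The reasoning about whether two names such as np1 and np2 may turn out to be aliases for a single node can be made concrete with a small sketch (an illustration, not the paper's code). Under the convention D(parent, child), a description rules out coreference of two names whenever one transitively dominates the other, which is exactly the W/X/Y/Z situation described in the abstract; tests for clashing categories and other incompatible predications would be added in the same style but are omitted here.

```python
def transitive_closure(doms):
    """doms: set of (parent, child) 'dominates' predications; returns the full closure."""
    closure = set(doms)
    changed = True
    while changed:
        changed = False
        for a, b in list(closure):
            for c, d in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

def may_corefer(x, z, doms):
    """Two names may denote the same node only if neither dominates the other
    (a node cannot dominate itself); further compatibility tests are omitted."""
    closure = transitive_closure(doms)
    return (x, z) not in closure and (z, x) not in closure

doms = {("w", "x"), ("x", "y")}
print(may_corefer("x", "z", doms))                   # True: nothing yet distinguishes x and z
print(may_corefer("x", "z", doms | {("y", "z")}))    # False: now x dominates z transitively
```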
By and large, we believe that a significant subset of the grammar of English has been successfully embedded within the deterministic tree-building model. However, a residue of syntactic phenomena remain which defy simple explication within this framework. Some of these phenomena are particular problems for the deterministic tree-building framework. Others, for example coordination and gapping phenomena, have defied adequate explication within any existing theory of grammar.In the remainder of this paper we will explore a range of such phenomena, and argue that D-theory provides a consistent approach which yields simple accounts for the range of phenomena we have considered to date. We will first argue for taking "dominates', not "directly dominates" as primitive, and then later argue why the use of names is justified. (Our view that this representation should be viewed as a description hangs on the use of names. In this section and in section 5 we argue only for a representation which is a particular kind of directed acyclic graph. Only with the arguments of section 7 is the position that this is a kind of description at all defensible.)One particularly interesting class of sentences which seems to defy deterministic accounts is exemplified by (2).(2) I drove my aunt from Peoria's car. Sentences like (2) contain a constituent which has a misleading *leading edge', an initial right-embedded subconstituent which could itself be the next constituent of whatever structure is being built at the next level up. For example, while analyzing (2), a parser which deterministically builds old-fashioned trees might just take "my aunt" to be the object of "drove', attaching it as the object of the VP, only to discover (too late) that this phrase functions instead as genitive determiner of the full NP "my aunt from Peoria's car'.In fact, the existing grammar for Parsifal causes exactly this behavior, and for good reason: This parser constructs NPs only up to the head noun before deciding on their role within the larger context; only after attaching an NP will Parsifal construct the post-modifiers of the NP and attach them, (This involves a mechanism called node reactivation; it is described in [Shipman & Marcus 79] .) One reason for this within the earlier framework is that, given a PP which immediately follows the head of an NP, it cannot be determined whether that PP should be attached to the preceding NP or to some constituent which dominates the NP until the role of that NP itself has been determined. In the specific case of (2), the parser will attach "my aunt" as the object of the verb "drove" so that it can decide where to attach the PP beginning with "from'. Only after it is too late will the parser see the genitive marker on "Peoria's" and boggle. While one could attempt to overcome this particular motivation for the two-stage parsing of NPs with some variant of the notion of pseudo-attachment (first used in [Church 801 ), this and related approaches have their problems too, as ChurchPotential pseudo-attachment solutions aside, the upshot is that sentences like (2) will cause deterministic tree building parsers to garden path. 
However, it is our strong intuition that such cases are not "garden paths"; we believe that such cases should be analyzed correctly by a deterministic parser rather than by the (putative) mechanism which recovers from garden paths. The D-theoretic solution to the problem of misleading "leading edges" hinges on one formal property of this problem: the initial analysis of this class of examples is incorrect only in that some constituent is attached in the parse tree at a higher point in the surrounding structure than is correct. Crucially, the parser neither creates structures of the wrong kind nor does it attach the structure that it builds to some structure which does not dominate it. In the misanalysis of (2), the parser initially errs only in attaching the NP "my aunt", which is indeed dominated by the VP whose head is "drove", too high in the structure. This class of examples is handled by D-theory without difficulty exactly because syntactic analyses are expressed in terms of domination rather than direct domination. The developing description of the structure of (2) in a D-theory parser, at the point at which the parser had analyzed "my aunt" but no further, might include the following predications:

(3.1) D(vp1, np1)
(3.2) D(vp1, v1)

where the verb node named v1 dominates "drove", and the NP node named np1 dominates the lexical material "my aunt". Let us assume for the sake of simplicity that while building the PP "from Peoria's", the parser detects a genitive marker on the proper noun "Peoria's" and knows (magically, for now) that "Peoria's car" is not the correct analysis. Given this, the genitive must mark the entire NP "my aunt from Peoria", and thus "my aunt from Peoria" must serve not as the object of the verb "drove" but as the determiner of some larger NP which itself must be the object of "drove". (Unless it is followed by a genitive marker, in which case....) The question we are centrally interested in here is not how the parser comes to the realization that it has erred, but rather what can be done to remedy the situation. (Actually, how the parser resolves this first problem is a complex and interesting story in and of itself, with the punchline being that exactly one (but only one) of (2) and (4)....) The description (3) is easily fixed, given that "D" is read "dominates", and not "directly dominates". Several further predications can merely be added to (3), namely those of (5), which state that np1 is dominated by a determiner node named det1, which is itself dominated by a new NP node, np2, and that np2 is dominated by vp1.

(5.1) D(det1, np1)
(5.2) D(np2, det1)
(5.3) D(vp1, np2)

Adding these new predications does not make the predications of (3) false; it merely adds to them. The node named np1 is still dominated by vp1 as stated in (3.1), because the relation "D" is transitive. Given the predications in (5), (3.1) is redundant, but it is not false. The general point is this: D-theory allows nodes to be attached initially by a parser to some point which will turn out to be higher than its lowest point of attachment (for the more general sense of attachment defined above) without such initial states causing the parser to garden path. Because of the nature of "D", the parser can in this sense "lower" a constituent without falsifying a previous predication. The earlier predication remains indelible.
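To make this bookkeeping concrete, the following is a minimal sketch (ours, not the paper's mechanism) of an indelible description database in which domination predications can only be added, never retracted; the class name, the cycle check, and the set-based encoding are illustrative assumptions. The lowering of "my aunt" is then just the addition of the predications in (5).

# A minimal sketch, assuming D(x, y) ("x dominates y") is stored as a pair (x, y).
class Description:
    def __init__(self):
        self.facts = set()                      # pairs (dominator, dominated)

    def closure(self):
        """Transitive closure of the domination facts."""
        c = set(self.facts)
        changed = True
        while changed:
            changed = False
            for a, b in list(c):
                for b2, d in list(c):
                    if b == b2 and (a, d) not in c:
                        c.add((a, d))
                        changed = True
        return c

    def assert_D(self, x, y):
        """Add D(x, y); refuse additions that would make domination cyclic."""
        if x == y or (y, x) in self.closure():
            raise ValueError(f"D({x}, {y}) is inconsistent with the description")
        self.facts.add((x, y))

desc = Description()
desc.assert_D("vp1", "v1")          # (3.2)
desc.assert_D("vp1", "np1")         # (3.1)

# On seeing the genitive marker, the parser merely adds the predications in (5):
desc.assert_D("det1", "np1")        # (5.1) np1 is dominated by det1
desc.assert_D("np2", "det1")        # (5.2) det1 is dominated by np2
desc.assert_D("vp1", "np2")         # (5.3) np2 is dominated by vp1

# (3.1) is now redundant but still true: vp1 still dominates np1.
print(("vp1", "np1") in desc.closure())    # True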
But how can such a list of domination predications be interpreted? It would seem that compositional semantics must depend upon being able to determine exactly what the immediate constituents of any given structure are: if the meaning of a phrase determined from the meanings of its parts, then it must be determined exactly what its parts are.We assume that semantic interpretation of a D-theory analysis is done by taking such an analysis as describing the minimal tree possible, i.e. by taking "D" to mean directly dominates wherever possible but only for semantic analysis. For example.if the analysis of a structure includes the predications that X dominates Y, Y dominates Z and X also dominates Z, then the semantic interpreter will assume that X directly dominates Y and that Y directly dominates Z. We will call such an interpretation of a D-theoretic analysis the standard referent of the analysis. (We further assume that the description produced by a D-theory parser will have at each stage of the analysis one and only one standard referent, and the complex situation where two or more chains of domination must be merged to arrive at a single standard referent will not arise in the operation of a Dtheory parser. Substantiation of these assumptions awaits the construction of a parser and a sizable grammar.)This notion of "standard referent" means that adding predications to the (partial) analysis of a sentence may very well change the standard referent of that analysis as viewed by the semantic interpreter. The key idea here is that from the point of view of semantics, the structure built by the parser may appear to change, but from the parser's point of view, the description remains indelible.The situation we describe is not far from that which occurs as the usual case in the communication of descriptions of objects between individuals. Suppose Don says to you, standing before you wearing a brown tweed jacket, "My coat is too warm". The phrase "my coat" can refer to any coat that Don owns, yet you will undoubtedly take the phrase to refer to the brown tweed jacket. Given that descriptions are always necessarily partial, there must always be a conventional standard referent for a description. But now suppose that Don says "My blue coat is too warm'. He merely adds "blue" to the phrase "my coat", but the set of possible referents changes, and in fact shrinks. More to the point, you will now take the referent of the phrase "my blue coat" to mean some blue coat or other which Don owns; i.e. adding to the description changes the standard referent.The key notion here is that because descriptions are always underspecified, there must be some set of conventions for choosing the intended single referent out of the often large (and sometimes infinite) class of objects that any given description is true of. Thus, once we claim that the output of syntactic analysis is a description, it is not surprising that there must be some restrictive conventions to determine exactly what such a description refers to. Given this, the convention we assume seems a simple and natural one.Another problematic class of constructions for deterministic tree-building theories are those for which it is argued that some kind of active reanalysis process must occur. For each of these constructions, there is linguistic evidence (of varied force) which suggests (recast in processing terms) that different syntactic structures must be assigned to that construction at different points during grammatical processing. 
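Before pursuing reanalysis, the standard-referent convention just described can be made concrete with a rough sketch (ours; the helper names and the set encoding of domination facts are illustrative assumptions): the parent of each node in the standard referent is taken to be its lowest dominator in the description.

# A sketch of reading "D" as direct domination wherever possible.
def transitive_closure(facts):
    c = set(facts)
    changed = True
    while changed:
        changed = False
        for a, b in list(c):
            for b2, d in list(c):
                if b == b2 and (a, d) not in c:
                    c.add((a, d))
                    changed = True
    return c

def standard_referent(facts):
    c = transitive_closure(facts)
    nodes = {n for pair in c for n in pair}
    parent = {}
    for y in nodes:
        dominators = [x for x in nodes if (x, y) in c]
        # the direct parent is the dominator that every other dominator dominates
        lowest = [x for x in dominators
                  if all(z == x or (z, x) in c for z in dominators)]
        if lowest:
            parent[y] = lowest[0]
    return parent

# The example from the text: X dominates Y, Y dominates Z, and X dominates Z.
print(standard_referent({("X", "Y"), ("Y", "Z"), ("X", "Z")}))
# e.g. {'Y': 'X', 'Z': 'Y'}: X directly dominates Y, and Y directly dominates Z.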
In other words, it can be demonstrated that each of these constructions has properties which provide evidence for one particular structure at one stage of processing, while displaying properties which argue for a quite different structure at a later stage of processing. But if this reanalysis account is the correct account for any of these constructions, then the deterministic tree building theory must be wrong somewhere, for changing a structural analysis is the one thing that indelible systems cannot do, ex hypothesL One class of examples widely assumed to involve some kind of reanatysis is the class of verb complement structures which have so-called "pseudo-passives". These verbs seem to have two passive forms, one of which has an NP in subject position which serves in the same role as that served by the seeming object of the active form, while the other passive form seems to have an underlying prepositional object in subject position. For example, there are two passives which correspond to the active sentence (6.1), a "normal" passive (6.3), and a passive which seems to pull the object of "of" into subject position, namely, (6.2).(6.1) Past owners had made a mess of the house. (6.2) The house had been made a mess of. (6.3) A mess had been made of the house.One fairly common view is that the phrase "made a mess of. functions as a single idiomatic verb, so that "the house" in (6.1) and (6. 2) can be simply viewed as the object of the verb "made a mess of.. But then to account for (6.3), it must be assumed that "made" is first treated as a normal verb with "a mess" as object. This means that either (6.3) has a different underlying syntactic structure than (6.1-2), or that the syntactic analysis assigned to the string "made of" (or perhaps "made <trace> of') changes after the passive is accounted for. To get a consistent syntactic analysis for these sentences, one can argue either that reanalysis always or never takes place. The position that we find most tenable, given the evidence, is that reanalysis sometimes takes place. (Of course, the fact that purely lexical accounts (see, e.g. [Bresnan 82] ) seem plausible leaves the older tree-building theories on not entirely untenable ground.) But how can any reanalysis at all be reconciled with the determinism hypothesis?Consider the analysis that a D-theory parser will have built up after having parsed "made a mess', but before noticing "of'. At this point the parser should assign the sentence a non-idiomatic reading, with "a mess" the real object of "made". Some of the predications in the analysis will be(7.1) D(vpl, vl) (7,2) D(vpl, npl)where vpl is a vp node dominating "made" and npl is an np node dominating "a mess ~. (Note that'in (8.1) The children made a mess, but then cleaned it up."it" refers to a mess, but that one cannot say (8.2) *The children made a mess of their bedrooms, but then cleaned it up.which seems to indicate that the phrase "a mess" is opaque to anaphoric reference in the idiomatic reading, and that therefore (8.1) is not idiomatic in the same sense.)We assume here that the preposition "of" is lexically marked for the idiomatic verb "make a mess', i.e. it is lexically specified for the idiom, but it is not itself a part of the idiom. Evidence for this includes sentences like (9), in which the preposition cannot be reanalyzed into the verb, given D-theory, as we will see below.(9) Of what did the children make a mess'?From a parsing point of view, this means that the presence of the preposition "of. 
will serve as a trigger to the reanalysis of "make a mess", without being part of the reanalysed material itself. (Thanks to Chris Halverson for pointing out a problem caused by (9) for an earlier analysis.)Returning to the analysis of (6.1), the preposition "of" triggers exactly such a reanalysis. Given D-theory, this can be effected simply by adding the additional predication (10) to (7.1-2) above:(10) D(vl, npl)Given this new predication, the standard referent of the description now has npl directly dominated by vl, i.e. it is now part of the verb. And now when "a house" is noticed by the parser, it will be attached as the first NP after the verb vl, i.e. as its object. Once again, the predications (7.1-2) are not falsified by the additional predication; they remain indelibly true -npl remains dominated by vpl, although no longer directly dominated by it. But, to repeat the point, the parser is (blissfully) unaware of this notion; the standard referent is a notion meaningful only to semantics.The analysis of (6.2) proceeds as follows: After parsing "made" as a verb and "a mess" as its object and noticing the trigger "of" sitting in the buffer, the parser will add an extra predication effecting just the same "reanalysis" as was done for (6.1). We assume that the passive rule inserts a trace either immediately after a verb, or after the preposition immediately following a verb, if that preposition is lexically specified for that verb. We will not argue for this analysis here; suffice it to say that this analysis is motivated by facts which also motivate recent somewhat similar analyses of passive, e.g. [Hornstein and Weinberg 811 and [Bresnan 82] . Given this analysis, the parser will now drop a passive trace for the subject "the house" into the buffer after the lexically specified preposition "of", and the parse will then move to completion. (One issue that remains open, though, is exactly how the parser knows not to drop the passive trace after "made'. The solution to this particular problem must interact correctly with many such control problems involving passive. Resolving this entire set of issues in a consistent fashion awaits the pending implementation of a parser to serve as a tool in the investigation of these control issues.)How is (6.3) parsed? Here we assume that the parser will drop a passive trace after the verb "made'. Because we assume that the parser cannot access the binding of the trace, and therefore cannot access the lexical material "a mess', it must be the case that reanalysis will not take place in this case. While this asymmetry may seem unpleasant, we note that there is no evidence that syntactic reanatysis has taken place here. Instead,. we assume that semantic processing will simply add an additional domination predicate after it notices the binding of the passive trace. Thus, the reanalysis here is semantic, not syntactic. (Note that there are other cases, e.g. right dislocation, where it is clear that additional domination predicates are added by post-syntactic processes. We believe that semantics can add domination predicates, but cannot construct new nodes.)As an example of the kind of operation that is ruled out by Dtheory, let us return to our assertion above that the preposition "of" cannot always be part of the idiomatic verb "make a mess'. Consider (9) above. In this sentence, the analysis will include some assertions that "of" is dominated by a PP, which itself is dominated by COMP. 
But if an assertion is then added to this description asserting that "of" is also dominated by a verb node, then there is no consistent interpretation of this structure at all, since the COMP cannot dominate the verb node and the verb node cannot dominate the COMP. Put more simply, there is no way something can merely be "lowered" from a COMP node into the verb. Another possibility similarly ruled out by D-theory is that in sentences like (6.1) there is initially a PP node which dominates both "of" and the NP "the house", but that "of" is reanalyzed into the idiomatic verb. For "of" to be dominated by a verb node, given that it is already dominated by the PP node, either the PP node must be dominated by the verb or the verb by the PP node, if the dominance relations are to be consistent. But it makes no sense for the PP node to have a standard referent where it immediately dominates only a verb and an NP, but no preposition. And if the verb dominates the PP, then the verb also dominates the NP which serves as the object of the VP, which is impossible. In this sense, D-theory is clearly more restrictive than the theory of [Lasnik and Kupin 77], at least as interpreted by [Chomsky 81], where reanalysis is done by adding an additional monostring to the existing Restricted Phrase Marker and eliminating others. In this case, the domination relations implied by the new analysis need not be consistent with those implicit in the pre-reanalysis RPM.

While we will not discuss this issue here at length, our current account of D-theory includes a set of stipulated constraints that further restrict where new domination predications can be added to a description. These constraints include the following: the Rightmost Daughter Constraint, that only the rightmost daughter of a node can be lowered under a sibling node at any given point in the parsing process; the No Crossover Constraint, that no node can be lowered under a sibling which is not contiguous to it; and some others. As viewed from the point of view of the standard referent, we believe that a D-theory parser will appear to operate, by and large, just like a tree-building deterministic parser, until it creates some structure whose standard referent must be changed. From the parser's point of view, it will scan base templates left-to-right for the most part, initiating some in a top-down manner, some in a bottom-up manner, until it finds itself unable to fill the next template slot somehow or other. At this point some mechanism must decide what additional predications to add to allow the parser to proceed. The functional force of the stipulations discussed above is to severely restrict the range of possibilities that can be considered in such a situation. Indeed, we would be delighted if it turned out to be the case that the parser can never consider more than several possibilities at any point that such an operation will be performed. It is particularly worthy of note that these two constraints interact to predict that the range of constructions that can be reanalyzed in the manner discussed in the last section is severely circumscribed, and that this prediction is borne out (see [Quirk, Greenbaum, Leech & Svartvik 72], §12.64).

These two constraints together predict that verb reanalysis is possible only when a single constituent precedes the trigger for reanalysis. Suppose that there were two constituents which preceded the trigger for reanalysis, i.e. that the order of constituents in the VP is

V C1 C2 T

where C1 and C2 are the two constituents, and T is the trigger. Then these two constituents would be attached to the VP whose head is V before T is encountered, causing the parser (before attaching T) to assert two new predications which would have the force of shifting the two constituents into the verb. But which predication could the parser add first? If it asserts D(V, C1), this violates the Rightmost Daughter Constraint, because only C2 can be lowered under a sibling. But if the parser first asserts D(V, C2), then C2 crosses over C1, which is prohibited by the No Crossover Constraint. Therefore, only one constituent can have been attached before the reanalysis occurs.
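A rough sketch (ours, not the paper's formulation) of how a parser might vet a proposed lowering against these two constraints; the function name, the parent/daughter encoding, and the contiguity test are illustrative assumptions. Run on the V C1 C2 configuration above, it reproduces the prediction that reanalysis is blocked when two constituents precede the trigger.

# children maps each node to its daughters, left to right, in the current standard referent.
def lowering_allowed(children, node, new_parent):
    old_parent = next((p for p, kids in children.items() if node in kids), None)
    if old_parent is None:
        return False                      # the node is not attached yet
    siblings = children[old_parent]
    if new_parent not in siblings:
        return False                      # lowering is only allowed under a sibling
    # Rightmost Daughter Constraint: only the rightmost daughter may be lowered
    if siblings[-1] != node:
        return False
    # No Crossover Constraint, simplified here to: the target sibling must be
    # immediately to the left of the node being lowered
    if siblings.index(new_parent) != siblings.index(node) - 1:
        return False
    return True

children = {"VP": ["V", "C1", "C2"]}
print(lowering_allowed(children, "C1", "V"))             # False: C1 is not the rightmost daughter
print(lowering_allowed(children, "C2", "V"))             # False: C2 would cross over C1
print(lowering_allowed({"VP": ["V", "C1"]}, "C1", "V"))  # True: a single constituent precedes the trigger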
This paper has described a new theory of natural language syntax and parsing which argues that the proper output of syntactic analysis is not a tree structure per se, but rather a description of such structures. Rather than constructing a tree, a natural language parser based on these ideas will construct a single description which can be viewed as a partial description of each of a family of trees. The two key ideas that we have presented here are:

(1) An analysis of a syntactic structure consists primarily of predications of the form "node X dominates node Y", and not the more traditional "node X immediately dominates node Y"; syntactic analysis never says more than that node X is somewhere above node Y.

(2) Because this is a description, two names used to refer to syntactic structures can always co-refer if their descriptions are compatible, and furthermore, it is impossible to block the possibility of coreference if the descriptions are compatible.

These two ideas, taken together, imply that during the process of analyzing the structure of a given utterance, merely adding to the emerging description may change the set of trees ultimately described (just as adding "honest" to the phrase "all politicians" may radically change the set described). We have also sketched some implications of this theory that not only suggest a new analysis of coordinate structures, but also suggest that coordinate structures might be much easier to analyze than current parsing techniques would suggest. We are currently working to flesh out the analyses presented above. We are also working on an analysis of gapping and elision phenomena which seems to fall naturally out of this framework. This new analysis is surprising in that it makes crucial use of descriptions even less fully specified than those we have discussed in this paper, by using the notations we have introduced here to fuller advantage. These emerging analyses move yet further away from the traditional view of either trees or phrase markers as an appropriate framework for expressing syntactic generalizations.
Main paper: d-theory: an introduction: The key idea of D-theory is that a syntactic analysis of a sentence of English (or other natural language) consists of a description of its syntactic structure. Such a description contains information which differs from that contained in a standard tree structure in two crucial ways:1) The primitive predicate for indicating hierarchical structure in a D-theory description is "dominates" rather than "directly dominates". (A node A is said to dominate a node B if A is some ancestor of B; A is said to directly dominate B if A is the immediate parent of B.) A D-theory analysis thus expresses directly only what structures are contained (somewhere) within larger structures, but does indicate per se what the immediate constituents of any particular constituent are.A tree structure, on the other hand, encodes which nodes are directly dominated by other nodes in the analysis; it indicates directly the immediate constituents of each node. In a standard parse tree, the topmost S node might directly dominate exactly a Noun Phrase node, an Aux node and a Verb Phrase node; it is thus made up of three subparts: .that NP, that Aux, and that VP.2) A D-theory description uses names to make statements about entities, and does not contain the entities themselves. Furthermore, there is no distinguished set of names which are taken to be standard names or rigid designators; i.e. given only a name, one cannot tell what particular .syntactic entity it refers to. (This is the primary reason that we view D-theory representations as descriptions and not merely as directed acyclic graphs.)Because there are no standard names, if one is presented with two descriptions, each in terms of a different name, one can tell with certainty only if the two names refer to different entities, but never (for sure) if they refer to the same entity. In the latter case, there is always potential ambiguity. To take a commonplace example, given that "John has red hair" and "Mr.The structure of the rest of this paper is roughly as follows: We will first sketch the computational framework we build on, in essence that of [Marcus 80] , and explore briefly what a parser for this kind of grammar might look like; in appearance, its data structures and grammar will be Iittle different from that developed in [Berwick 82] . A series of syntactic phenomena will then be explored which resist elegant account within the earlier framework. For each phenomenon, we will present a simple Dtheoretic solution together with exposition of the relevant aspects of D-theory.One final introductory comment: That D-theory expresses syntactic structure in terms of dominance rather than direct dominance may be reminiscent of [Lasnik & Kupin 1977] (henceforth L-K), but our use of the dominance predicate differs fundamentally from the L-K formulation both in the primacy of the predicate to the theory, and in the theory of syntax implied.Theory der:ves domino.tion relations from their primary representation of linguistic structure, namely a set of strings of terminals and nonterminals with specified properties. D-theory structures are expressed directly in terms of dominance relations; the linear order of constituents is only directly expressed for items in the lexical string. Despite appearances, D-theory and the Lasnik-Kupin formalization are not interdefinable. 
We discuss the properties of the Lasnik-Kupin formalization at length in a forthcoming paper.[29 20 DeterminLqgic Tree-Building: The Old Theory D-theory grows out of earlier work on deterministic parsing as deterministic tree building (as in e.g. [Marcus 19801 , [Church 801 and [Berwick 82] ). The essence of that work is the hypothesis that natural language can be analyzed by some process which builds a syntactic analysis indelibly (borrowing a term from [McDonald 83]); i.e. that any structure built by the parser is part of the correct analysis of the input. Again, in the context of this earlier theory, the form of the indelible syntactic analysis was that of a tree.One key idea of this earlier tree-building theory that we retain is the notion that a natural language parser can buffer and examine some small number (e.g. up to three) unattached constituents before being forced to add to its existing structures. (In D-theory, the node named X is attached to Y if the parser's description of the existing structure includes a predication of the form "Y dominates X', or, as we will henceforth write, "D(Y,X)." X is unattached if the parser's description of the existing structure includes no predication of the form "D(Y, X)', for any name Y.) We thus assume that such a parser will have the two principle data structures of these earlier deterministic parsers, a stack and a buffer. However, the stack and the buffer in a D-theory parser will contain names rather than constituents, and these data structures will be augmented by a data base where the description of the syntactic structure itself is built up by the parser. (While this might sound novel, a moment's reflection on LISP implementation techniques should assure the reader that this structure is far less different from that of older parsers like Parsifal and Fidditch [Hindle 831 than it might sound.)As we shall see below, however, a parser which embodies Dtheory can recover (in some sense) from some of the constructions which would terminally confuse (or "garden path') a parser based on the deterministic tree-building theory. For D-theory to be psychologically valid, of course, it must be the case that just those constructions which do garden path a Dtheory parser garden path people as well. (We might note in passing that recent experimental paradigms which explore online syntactic processing using eye-tracking technology promise to provide delicate tests of these hypotheses, e.g. [Rayner & Frazier 831.) Another goal of this earlier work was to find some way of procedurally representing grammars of natural languages which is brief and perspicuous, and which allows (and perhaps even forces) grammatical generalizations to be stated in a natural way. As is often argued, such a representation must be embodied by our language understanding faculty, given that the grammar of a language is learned incrementally and quickly by children given only limited evidence. (To recast this point from an engineering point of view, this property is also a prerequisite to writing a grammar for a subset of some given natural language which remains extensible, so that new constructions can be added to the grammar without global changes, and so that these new constructions will interact robustly with the old grammar.)Following [Shipman 78] , as refined in [Berwick 82] . we assume that the grammar is organized into a set of context free rules, which we will call base templates, and a set of pattern-action rules. 
As in Parsifal, each pattern consists of up to four elements, each of which is a partial description of an element in the buffer, or the accessible node in the stack (the "current active node'). Loosely following [Berwick 82] , we assume that the action of each rule consists of exactly one of some small set of limited actions which might include the following:• Attach a node in the buffer to the current active node.• Switch the nodes in the first two buffer positions.• Insert a specified lexical item into a specified buffer slot.• Create a new current active node.• Insert an empty NP into the first buffer slot.(Where "attachment" is as defined above, and "create" means something like coin a new node name, and push it onto the active node stack.) Each rule is associated with some position in one of the base templates. So, for example, in figure 1 below, one base template is given, a highly simplified template for a sentence. Associated with the NP in the subject position of the sentence are several rules. The first rule says that if the first buffer position holds a name which is asserted to be an NP (informally: if there is an NP in the first buffer slot), then (informally) it is dominated by the S. The second says that if there is an auxiliary verb in the first slot followed by an NP, then switch them. And so on.Note that while a D-the0ry parser itself has no predicate with which to express direct dominance, the base templates explicitly encode just such information. Insofar as the parser makes its assertions of dominance on the basis of the phrase structure rules, the parser will behave very similarly to deterministic tree building parsers. In fact, the parser will typically (although, as we will see below, not always) behave in just such a fashion.S .> NP VP PP* {[NPI-> Attach} {[auxvl[NP]-> Switch} {[v, tenselessl -> lnsert(NP, 0)} the problem of misleading leading edges: By and large, we believe that a significant subset of the grammar of English has been successfully embedded within the deterministic tree-building model. However, a residue of syntactic phenomena remain which defy simple explication within this framework. Some of these phenomena are particular problems for the deterministic tree-building framework. Others, for example coordination and gapping phenomena, have defied adequate explication within any existing theory of grammar.In the remainder of this paper we will explore a range of such phenomena, and argue that D-theory provides a consistent approach which yields simple accounts for the range of phenomena we have considered to date. We will first argue for taking "dominates', not "directly dominates" as primitive, and then later argue why the use of names is justified. (Our view that this representation should be viewed as a description hangs on the use of names. In this section and in section 5 we argue only for a representation which is a particular kind of directed acyclic graph. Only with the arguments of section 7 is the position that this is a kind of description at all defensible.)One particularly interesting class of sentences which seems to defy deterministic accounts is exemplified by (2).(2) I drove my aunt from Peoria's car. Sentences like (2) contain a constituent which has a misleading *leading edge', an initial right-embedded subconstituent which could itself be the next constituent of whatever structure is being built at the next level up. 
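Before returning to the problem of misleading leading edges, the base template and rules of figure 1 can be rendered as a toy sketch (ours; the dispatch function, category labels, and example buffer are illustrative assumptions rather than the grammar formalism itself).

# The buffer holds (name, category) pairs; the description is a set of D(x, y) facts.
def run_subject_rules(buffer, description, active_node="s1"):
    first = buffer[0][1] if buffer else None
    second = buffer[1][1] if len(buffer) > 1 else None
    if first == "NP":                                   # {[NP] -> Attach}
        name = buffer.pop(0)[0]
        description.add((active_node, name))            # assert D(s1, name)
    elif first == "auxv" and second == "NP":            # {[auxv][NP] -> Switch}
        buffer[0], buffer[1] = buffer[1], buffer[0]
    elif first == "v-tenseless":                        # {[v, tenseless] -> Insert(NP, 0)}
        buffer.insert(0, ("np-empty", "NP"))
    return buffer, description

# "Will John leave?": an auxiliary verb followed by an NP triggers Switch,
# after which the NP rule attaches the subject under the S node.
buf = [("aux1", "auxv"), ("np1", "NP"), ("v1", "v")]
desc = set()
buf, desc = run_subject_rules(buf, desc)    # Switch
buf, desc = run_subject_rules(buf, desc)    # Attach np1 under s1
print([name for name, _ in buf], desc)      # ['aux1', 'v1'] {('s1', 'np1')}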
For example, while analyzing (2), a parser which deterministically builds old-fashioned trees might just take "my aunt" to be the object of "drove', attaching it as the object of the VP, only to discover (too late) that this phrase functions instead as genitive determiner of the full NP "my aunt from Peoria's car'.In fact, the existing grammar for Parsifal causes exactly this behavior, and for good reason: This parser constructs NPs only up to the head noun before deciding on their role within the larger context; only after attaching an NP will Parsifal construct the post-modifiers of the NP and attach them, (This involves a mechanism called node reactivation; it is described in [Shipman & Marcus 79] .) One reason for this within the earlier framework is that, given a PP which immediately follows the head of an NP, it cannot be determined whether that PP should be attached to the preceding NP or to some constituent which dominates the NP until the role of that NP itself has been determined. In the specific case of (2), the parser will attach "my aunt" as the object of the verb "drove" so that it can decide where to attach the PP beginning with "from'. Only after it is too late will the parser see the genitive marker on "Peoria's" and boggle. While one could attempt to overcome this particular motivation for the two-stage parsing of NPs with some variant of the notion of pseudo-attachment (first used in [Church 801 ), this and related approaches have their problems too, as ChurchPotential pseudo-attachment solutions aside, the upshot is that sentences like (2) will cause deterministic tree building parsers to garden path. However, it is our strong intuition that such cases are not "garden paths'; we believe that such cases should be analyzed correctly by a deterministic parser rather than by the (putative) mechanism which recovers from garden paths.The D-theoretic solution to the problem of misleading "leading edges" hinges on one formal property of this problem: The initial analysis of this class of examples is incorrect only in that some constituent is attached in the parse tree at a higher point in the surrounding structure than is correct. Crucially, the parser neither creates structures of the wrong kind nor does it attach the structure that it builds to some structure which does not dominate it. In the misanalysis of (2), the parser initially errs only in attaching the NP "my aunt', which is indeed dominated by the VP whose head is "drove', too high in the structure.This class of examples is handled by D-theory without difficulty exactly because syntactic analyses are expressed in terms of domination rather than direct domination. The developing description of the structure of (2) in a D-theory parser at the point at which the parser had analyzed "my aunt', but no further, might include the following predications: vpl, vl) where the verb node named vl dominates "drove', and the NP node named npl dominates the lexical material "my aunt'.(3.1) D(vpl, npl) (3.2) D(Let us assume for the sake of simplicity that while building the PP "from Peoria's', the parser detects a genitive marker on the proper noun "Peoria's" and knows (magically, for now) that "Peoria's car" is not the correct analysis. Given this, the genitive must mark the entire NP "my aunt from Peoria" and thus "my aunt from Peoria" must serve not as the object of the verb "drove" but as the determiner of some larger NP which itself must be the object of "drove'. (Unless it is followed by a genitive marker, in which case....) 
The question we are centrally interested in here is not how the parser comes to the realization that it has erred, but rather what can be done to remedy the situation. (Actually how the parser must resolve "..L first problem is a complex and interesting story in and of itself, with the punchline being that exactly one (but only one) of (2) and 4 The description (3) is easy fixed, given that "D" is read "dominates', and not "directly dominates'. Several further predications can merely be added to (3), namely those of (5), which state that npl is dominated by a determiner node named detl, which itself is dominated by a new np node; np2, and that np2 is dominated by vpl.(5.1) D(npl, detl) (5.2) D(detl, np2) (5.3) D(np2, vpl)Adding these new predications does not make the predications of (3) false; it merely adds to them. The node named npl is still dominated by vpl as stated in (3.1), because the relation "D" is transitive. Given the predications in (5), (3.1) is redundant, but it is not false.The general point is this: D-theory allows nodes to be attached initially by a parser to some point which will turn out to be higher than its lowest point of attachment (for the more general sense of attachment defined above) without such initial states causing the parser to garden path. Because of the nature of "D'. the parser can in this sense "lower" a constituent without falsifying a previous predication. The earlier predication remains indelible. semantic interpretation: the standard referent: But how can such a list of domination predications be interpreted? It would seem that compositional semantics must depend upon being able to determine exactly what the immediate constituents of any given structure are: if the meaning of a phrase determined from the meanings of its parts, then it must be determined exactly what its parts are.We assume that semantic interpretation of a D-theory analysis is done by taking such an analysis as describing the minimal tree possible, i.e. by taking "D" to mean directly dominates wherever possible but only for semantic analysis. For example.if the analysis of a structure includes the predications that X dominates Y, Y dominates Z and X also dominates Z, then the semantic interpreter will assume that X directly dominates Y and that Y directly dominates Z. We will call such an interpretation of a D-theoretic analysis the standard referent of the analysis. (We further assume that the description produced by a D-theory parser will have at each stage of the analysis one and only one standard referent, and the complex situation where two or more chains of domination must be merged to arrive at a single standard referent will not arise in the operation of a Dtheory parser. Substantiation of these assumptions awaits the construction of a parser and a sizable grammar.)This notion of "standard referent" means that adding predications to the (partial) analysis of a sentence may very well change the standard referent of that analysis as viewed by the semantic interpreter. The key idea here is that from the point of view of semantics, the structure built by the parser may appear to change, but from the parser's point of view, the description remains indelible.The situation we describe is not far from that which occurs as the usual case in the communication of descriptions of objects between individuals. Suppose Don says to you, standing before you wearing a brown tweed jacket, "My coat is too warm". 
The phrase "my coat" can refer to any coat that Don owns, yet you will undoubtedly take the phrase to refer to the brown tweed jacket. Given that descriptions are always necessarily partial, there must always be a conventional standard referent for a description. But now suppose that Don says "My blue coat is too warm'. He merely adds "blue" to the phrase "my coat", but the set of possible referents changes, and in fact shrinks. More to the point, you will now take the referent of the phrase "my blue coat" to mean some blue coat or other which Don owns; i.e. adding to the description changes the standard referent.The key notion here is that because descriptions are always underspecified, there must be some set of conventions for choosing the intended single referent out of the often large (and sometimes infinite) class of objects that any given description is true of. Thus, once we claim that the output of syntactic analysis is a description, it is not surprising that there must be some restrictive conventions to determine exactly what such a description refers to. Given this, the convention we assume seems a simple and natural one. on the re.analysis of indelible strucmre~: Another problematic class of constructions for deterministic tree-building theories are those for which it is argued that some kind of active reanalysis process must occur. For each of these constructions, there is linguistic evidence (of varied force) which suggests (recast in processing terms) that different syntactic structures must be assigned to that construction at different points during grammatical processing. In other words, it can be demonstrated that each of these constructions has properties which provide evidence for one particular structure at one stage of processing, while displaying properties which argue for a quite different structure at a later stage of processing. But if this reanalysis account is the correct account for any of these constructions, then the deterministic tree building theory must be wrong somewhere, for changing a structural analysis is the one thing that indelible systems cannot do, ex hypothesL One class of examples widely assumed to involve some kind of reanatysis is the class of verb complement structures which have so-called "pseudo-passives". These verbs seem to have two passive forms, one of which has an NP in subject position which serves in the same role as that served by the seeming object of the active form, while the other passive form seems to have an underlying prepositional object in subject position. For example, there are two passives which correspond to the active sentence (6.1), a "normal" passive (6.3), and a passive which seems to pull the object of "of" into subject position, namely, (6.2).(6.1) Past owners had made a mess of the house. (6.2) The house had been made a mess of. (6.3) A mess had been made of the house.One fairly common view is that the phrase "made a mess of. functions as a single idiomatic verb, so that "the house" in (6.1) and (6. 2) can be simply viewed as the object of the verb "made a mess of.. But then to account for (6.3), it must be assumed that "made" is first treated as a normal verb with "a mess" as object. This means that either (6.3) has a different underlying syntactic structure than (6.1-2), or that the syntactic analysis assigned to the string "made of" (or perhaps "made <trace> of') changes after the passive is accounted for. 
To get a consistent syntactic analysis for these sentences, one can argue either that reanalysis always or never takes place. The position that we find most tenable, given the evidence, is that reanalysis sometimes takes place. (Of course, the fact that purely lexical accounts (see, e.g. [Bresnan 82] ) seem plausible leaves the older tree-building theories on not entirely untenable ground.) But how can any reanalysis at all be reconciled with the determinism hypothesis?Consider the analysis that a D-theory parser will have built up after having parsed "made a mess', but before noticing "of'. At this point the parser should assign the sentence a non-idiomatic reading, with "a mess" the real object of "made". Some of the predications in the analysis will be(7.1) D(vpl, vl) (7,2) D(vpl, npl)where vpl is a vp node dominating "made" and npl is an np node dominating "a mess ~. (Note that'in (8.1) The children made a mess, but then cleaned it up."it" refers to a mess, but that one cannot say (8.2) *The children made a mess of their bedrooms, but then cleaned it up.which seems to indicate that the phrase "a mess" is opaque to anaphoric reference in the idiomatic reading, and that therefore (8.1) is not idiomatic in the same sense.)We assume here that the preposition "of" is lexically marked for the idiomatic verb "make a mess', i.e. it is lexically specified for the idiom, but it is not itself a part of the idiom. Evidence for this includes sentences like (9), in which the preposition cannot be reanalyzed into the verb, given D-theory, as we will see below.(9) Of what did the children make a mess'?From a parsing point of view, this means that the presence of the preposition "of. will serve as a trigger to the reanalysis of "make a mess", without being part of the reanalysed material itself. (Thanks to Chris Halverson for pointing out a problem caused by (9) for an earlier analysis.)Returning to the analysis of (6.1), the preposition "of" triggers exactly such a reanalysis. Given D-theory, this can be effected simply by adding the additional predication (10) to (7.1-2) above:(10) D(vl, npl)Given this new predication, the standard referent of the description now has npl directly dominated by vl, i.e. it is now part of the verb. And now when "a house" is noticed by the parser, it will be attached as the first NP after the verb vl, i.e. as its object. Once again, the predications (7.1-2) are not falsified by the additional predication; they remain indelibly true -npl remains dominated by vpl, although no longer directly dominated by it. But, to repeat the point, the parser is (blissfully) unaware of this notion; the standard referent is a notion meaningful only to semantics.The analysis of (6.2) proceeds as follows: After parsing "made" as a verb and "a mess" as its object and noticing the trigger "of" sitting in the buffer, the parser will add an extra predication effecting just the same "reanalysis" as was done for (6.1). We assume that the passive rule inserts a trace either immediately after a verb, or after the preposition immediately following a verb, if that preposition is lexically specified for that verb. We will not argue for this analysis here; suffice it to say that this analysis is motivated by facts which also motivate recent somewhat similar analyses of passive, e.g. [Hornstein and Weinberg 811 and [Bresnan 82] . 
Given this analysis, the parser will now drop a passive trace for the subject "the house" into the buffer after the lexically specified preposition "of", and the parse will then move to completion. (One issue that remains open, though, is exactly how the parser knows not to drop the passive trace after "made'. The solution to this particular problem must interact correctly with many such control problems involving passive. Resolving this entire set of issues in a consistent fashion awaits the pending implementation of a parser to serve as a tool in the investigation of these control issues.)How is (6.3) parsed? Here we assume that the parser will drop a passive trace after the verb "made'. Because we assume that the parser cannot access the binding of the trace, and therefore cannot access the lexical material "a mess', it must be the case that reanalysis will not take place in this case. While this asymmetry may seem unpleasant, we note that there is no evidence that syntactic reanatysis has taken place here. Instead,. we assume that semantic processing will simply add an additional domination predicate after it notices the binding of the passive trace. Thus, the reanalysis here is semantic, not syntactic. (Note that there are other cases, e.g. right dislocation, where it is clear that additional domination predicates are added by post-syntactic processes. We believe that semantics can add domination predicates, but cannot construct new nodes.)As an example of the kind of operation that is ruled out by Dtheory, let us return to our assertion above that the preposition "of" cannot always be part of the idiomatic verb "make a mess'. Consider (9) above. In this sentence, the analysis will include some assertions that "of" is dominated by a PP, which itself is dominated by COMP. But if an assertion is then added to this description asserting that "of" is also dominated by a verb node, then there is no consistent interpretation of this structure at all, since the COMP cannot dominate the verb node and the verb node cannot dominate the COMP. Put more simply, there is no way something can merely be "lowered" from a COMP node into the verb.Another possibility similarly ruled out by D-theory is that in sentences like (6.1) there is initially a PP node which dominates both "of" and the NP "the house", but that "of" is reanalyzed into the idiomatic verb. For "of" to be dominated by a verb node, given that it is already dominated by the PP node, either the PP node must be dominated by the verb or the verb by the PP node, if the dominance relations are to be consistent. But it makes no sense for the PP node to have a standard referent where it immediately dominates only a verb and an NP, but no preposition. And if the verb dominates the PP, then the verb also dominates the NP which serves as the object of the VP, which is impossible.In this sense, D-theory is clearly more restrictive than the theory of [Lasnik and Kupin 771 , at least as interpreted by [Chomsky 81 ] , where reanalysis is done by adding an additional monostring to the existing Restricted Phrase Marker and eliminating others.In this case, the dominationrelations implied by the new analysis need not be consistent with those implicit in the prere, analysis RPM. constraints on d-theory: a brief discussion: While we will not discuss this issue here at length, our current account of D-theory includes a set of stipulated constro;-'-'hat further restrict where new domination predications can be added to a description. 
These constraints include the following: The Rightmost Daughter Constraint, that only the rightmost daughter of a node can be lowered under a sibling node at any given point in the parsing process; and The No Crossover Constraint, that no node can be lowered under a sibling which is not contiguous to it, and some others.As viewed from the point of view of the standard referent, we believe that a D-theory parser will appear to operate, by and large, just like a tree building deterministic parser, until it creates some structure whose standard referent must be changed. From the parser's point of view, it will scan base templates left-to-right for the most part, initiating some in a top-down manner, some in a bottom-up manner, until it finds itself unable to fill the next template slot somehow or other. At this point some mechanism must decide what additional predications to add to allow the parser to proceed. The functional force of the stipulations discussed above is to sevelely restrict the range of possibilities that can be considered in such a situation. Indeed, we would be delighted if it turned out to be the case that the parser can never consider more than several possibilities at any point that such an operation will be performed.It is particularly worthy of note that these two constraints interact to predict that the range of constructions that can be reanalyzed in the manner discussed in the last section is severely circumscribed, and that this prediction is borne out (see {Quirk, Greenbaum, Leech & Svartvik 72], §12.64).These two constraints together predict that verb reanalysis is possible only when a single constituent precedes the trigger for reanalysis:Suppose that there were two constituents which preceded the trigger for reanalysis, i.e. that the order of constituents in the VP iswhere C1 and C2 are the two constituents, and T is the trigger. Then these two constituents would be attached to the VP whose head is V before T is encountered, causing the parser (before attaching T) to assert two new predications which would have the force of shifting the two constituents into the verb. But which predication could be parser add first? If it asserts that D(V, CI), this violates the Rightmost Daughter Constraint, because only C2 can be lowered under a sibling. But if the parser first asserts D(V, C2) then C2 crosses over CI, which is prohibited by the No Crossover Constraint. Therefore, only constituent can have been attached before the reanalysis occurs. a deterministic approach to coordination: We now turn from the consequences of expressing syntactic structure in terms of domination to the use of names within Dtheory. As stated above, it is this use of names which really makes D-theory analyses descriptions, and not merely directed acyclic graphs. The power of naming can be demonstrated most clearly by investigating some implications of the use of names for the representation of coordinate constructions, i.e. conjunction phenomena and the like.Coordinate constructions are infamous for being highly ambiguous given only syntactic constraints; standard techniques for parsing coordinate structures, e.g. [Woods 73] , are highly combinatoric, and it would seem inherent in the phenomenon that tree-building parsers must do extensive search to build all syntactically possible analyses. (See, e.g. 
the analysis of [Church & Patil 1982] .)One widely-used approach which eliminates much of this seemingly inherent search is to use extensive semantic and pragmatic interaction interleaved with the parsing process to quickly prune unpromising search paths. While Parsifal made use of exactly such interactions in other contexts, e.g. to correctly place prepositional phrases, such interactions seem to demand at least implicitly building syntactic structure which is discarded after some choice is made by higher-level cognitive components. Because this is counter to at least the spirit of the determinism hypothesis, it would be interesting if the syntactic analysis of coordinate structures could be made autonomous of higher-level processes.There are more central problems for a deterministic analysis of conjunction, however. Techniques which make use of the lookahead provided by buffering constituents can deterministically handle a perhaps surprising range of coordinate phenomena, as first demonstrated by the YAP parser [Church 80 ], but there appear to be fundamental limitations to what can be analyzed in this way. The central problem is that a tree building deterministic parser cannot examine the context necessary to determine what is conjoined to what without constructing nodes which may turn out to be spurious, given the (ultimate) correct analysis.In what follows, we will illustrate each of these problems in more detail and sketch an approach to the analysis of coordinate structures which we believe can be extended to handle such structures deterministically and without semantic interaction.Consider the problem of analyzing sentences like (11.1-2). These two sentences are identical at the level of preterminal symbols; they differ only in the particular lexical items chosen as nouns, with the schematic lexical structure indicated by (11.3). However, (11.1) has the favored reading that the apples, pears and cherries are all ripe and from local orchards, while in (11.2), only the cheese is ripe and only the cider is from local orchards. From this, it is clear that (11.1) is read as a conjunction of three nouns within one NP, while (11.2) is read as a conjunction of three individual NPs, with structures as indicated by (ll.Ia,2a). We assume here, crucially, that constituents in coordination are all attached to the same constituent; they can be thought of as "stacking" in a plane orthogonal to the standard referent, as [Chomsky 82] suggests. The conjunction itself is attached to the rightmost of the coordinate structures.(ll.1) They sell ripe apples, pears, and cherries from local orchards.(1 l.la) They sell [NP ripe [N apples] Thus, it would seem that to determine the level at which the structures are conjoined requires much pragmatic knowledge about fruit, flowers and the like.Note also that while (11.1-2) have particular primary readings, one needs to consider these sentences carefully to decide what the primary reading is. This is suggestive of the kind of syntactic vagueness that VanLehn argues characterizes many judgements of quantifier scope [VanLehn 78]. Note, however, that most evidence suggests that quantifier scope is not represented directly in syntactic structure, but is interpreted from that structure. For the readings of (11.1-2) to be vague in this way, the structures of (I l.la-2a) must be interpreted from syntactic structure, and not be part of it. 
It turns out that D-theory, coupled with the assumption that the parser does not interact with semantic and pragmatic processing, provides an account which is consistent with these intuitions. But consider the D-theoretic analysis of (11.1); there are some surprises in store. Its representation will include predications like those of (12.1-8), where we are now careful to "unpack" informal names like "np1" to show that they consist of a content-free identifier and predications about the type of entity the identifier names. Here vp1 is the name of a node whose head is "sell", ap1 an adjective phrase dominating "ripe", and pp1 the PP "from local orchards." The analysis will also include predications about the left-to-right order of the terminal string, which has been informally represented in (12.9); "X < Y" is to be read "X is to the left of Y". We indicate the order of nonterminals here only for the sake of brevity; we use n1 < n2 as a shorthand for D(n1, 'cheese'); D(n2, 'bread'); 'cheese' < 'bread'. In particular, a D-theory analysis contains no explicit predications about left-right order of non-terminals. But given only the predications in (12), what can be said about the identities of the nodes named np1, np2, and np3? Under this description, the descriptions of np1, np2 and np3 are compatible descriptions; they are potentially descriptions of the same individual. They are all dominated by vp1, and each is an NP, so there is no conflict here. Each dominates a different noun, but several constituents of the same type can be dominated by the same node if they are in a coordinate structure (given the analysis of coordinate structures we assume) and if they are string adjacent. N1, n2 and n3 are string adjacent (given only (12)), so the fact that the nodes named np1, np2 and np3 dominate nouns which may turn out to be different does not make the descriptions of the NPs incompatible. (Indeed, if the nouns are viewed as a coordinate structure, then the structure of the nouns is the same as that of (11.1).) Furthermore, adj1 is immediately to the left of and pp1 is immediately to the right of all the nouns, so these constituents could be dominated by the same single NP that might dominate n1, n2 and n3 as well. Thus there is no information here that can distinguish np1 from np2 from np3. The fact that the conjunction "and" is dominated by np3 does not block the above analysis. The addition of one domination predicate leaves it dominated by n3 (as well as np3, of course), thereby making n1, n2 and n3 a perfect coordinate structure, and leaving no barrier to np1, np2 and np3 being co-referent. But this means that the D-theory analysis of (11.1) has as standard referents both it and (11.2)! (This modifies our statement earlier in this paper about the uniqueness of the standard referent; we now must say that for each possible "stacking" of nodes, there is one standard referent.) For if np1, np2 and np3 corefer, then the analysis above shows that the structure described is exactly that of (11.2). There is also the possibility that just np1 and np2 corefer, given the above analysis, which yields a reading where np2 is an appositive to np1, with np1 and np3 coordinate structures (the structure of appositives is similar to that of coordinate structures, we assume); and the possibility that just np2 and np3 corefer, yielding a reading with np1 and np2 coordinate structures, and np3 in apposition to np2.
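The pattern of which NP names may co-refer can be made concrete with a small sketch. The contiguity test below is a deliberate simplification of the string-adjacency condition discussed above, and all of the data structures are invented for illustration; it reproduces the pairwise pattern just described (np1/np2 or np2/np3 may co-refer, but np1 and np3 alone may not).

# Sketch: when could two NP names in the description name the same node?
# Simplification: two NPs of the same category are treated as potentially
# co-referent if the terminals they dominate form one contiguous span, so
# that a single node could dominate them all. Invented data layout.

TERMINALS = ["apples", "pears", "and", "cherries"]

def compatible(a, b):
    if a["cat"] != b["cat"]:
        return False
    spans = sorted(TERMINALS.index(t) for t in a["dominates"] | b["dominates"])
    return spans == list(range(spans[0], spans[-1] + 1))

np1 = {"cat": "NP", "dominates": {"apples"}}
np2 = {"cat": "NP", "dominates": {"pears"}}
np3 = {"cat": "NP", "dominates": {"and", "cherries"}}

print(compatible(np1, np2))   # True : np1 and np2 could be the same node
print(compatible(np2, np3))   # True : np2 and np3 could be the same node
print(compatible(np1, np3))   # False: only possible if np2 co-refers as well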
(The fact that we use a simplified phrase structure here is not an important fact. The analysis goes through equally as well with a full X-bar theoretic phrase component; the story is just much longer.) The upshot of this is that upon encountering constructions like (11), the parser can proceed by simply assuming that the structures are conjoined at the highest level possible, using different names for each of the potential highest level constituents. It can then analyze the (potentially) coordinate structures entirely independently of feedback from pragmatic and semantic knowledge sources. When higher cognitive processing of this description requires distinguishing at what level the structures are conjoined, pragmatics can be invoked where needed, but there need be no interaction with syntactic processes themselves. This is because, once again, it turns out that if it is syntactically possible that structures should be conjoined at a lower level than that initially posited, the names of the potentially separate constituents simply can be viewed as aliases of the one node that does exist in the corresponding standard referent; in this case all predications about whatever node is named by the alias remain true, and thus once again no predications need to be revoked. We now see how it is that D-theory gives an account of the intuition that the fine structure of coordinations is vague, in the sense of VanLehn. For we have seen that pragmatics does not need to determine whether (e.g.) all the fruits in (11.1) are ripe or not for the syntactic analysis to be completed deterministically, exactly because the D-theory analysis leaves all (and, we also claim, only) the syntactically correct possibilities open. Thus the description given in (12) is appropriately vague between possible syntactic analyses of sentences like those schematized in (11.3). Thus, this new representation opens the way for a simple formal expression of the notion that some sentences may be vague in certain well defined ways, even though they are believed to be understood, and that this vagueness may not be resolved until a hearer's attention is called to the unresolved decision.

the problem of nodes that aren't there: While we can give only the briefest sketch here (the full story is quite long and complicated), exactly this use of names resolves yet another problem for the deterministic analysis of coordinate structures: to examine enough context (in the buffer) to decide what kind of structure is conjoined with what, a tree-building parser will often have to go out on a limb and posit the existence of nodes which may turn out not to exist after all. For example, if a tree-building parser has analyzed the inputs shown in (13.1-2) up to "worms" and has seen "and" and "frogs" in the buffer, it will need to posit that "frogs" is a full NP to check to see if the pattern [conjunction] [NP] [verb] is fulfilled, and thus if an S should be created with the NP as its head.

(13.1) Birds eat small worms and frogs eat small flies.
(13.2) Birds eat small worms and frogs.

But if the input is not as in (13.1), but as in (13.2), then positing the NP might be incorrect, because the correct analysis may be a noun-noun conjunction of "worms" and "frogs" (with the reading that birds eat worms and frogs, both of which are small). Of course, there is a second problem here for a tree-building parser, namely that (13.2) has a second reading which is an "NP and NP" conjunction.
As we have seen above, there is no corresponding problem for a D-theory parser, because if it merely posits an NP dominating "frogs", the structure which will result for (13.2) is appropriately vague between both the NP reading and the noun reading of "frogs" (i.e. between the readings where the frogs are just plain frogs and where the frogs are small). But the solution to the second problem for a D-theory parser is also a solution to the first! After seeing "and" and "frogs" in its buffer, a D-theory parser can simply posit an NP node dominating "frogs" and continue. If the input proceeds as in (13.1), then the parser will introduce an S node and assert that it dominates the new NP. This will make the descriptions of the NPs dominating "worms" and dominating "frogs" incompatible, i.e. this will assure that there really are two NPs in the standard referent. If the input proceeds as in (13.2), a D-theory parser will state that the node referred to by the new name is dominated by the previous VP, resulting in the structure described immediately above. To summarize, where a tree-building parser might be misled into creating a node which might not exist at all, there is no corresponding problem for a D-theory parser.

summing up: d-theory on one foot: This paper has described a new theory of natural language syntax and parsing which argues that the proper output of syntactic analysis is not a tree structure per se, but rather a description of such structures. Rather than constructing a tree, a natural language parser based on these ideas will construct a single description which can be viewed as a partial description of each of a family of trees. The two key ideas that we have presented here are: (1) An analysis of a syntactic structure consists primarily of predications of the form "node X dominates node Y", and not the more traditional "node X immediately dominates node Y"; syntactic analysis never says more than that node X is somewhere above node Y. (2) Because this is a description, two names used to refer to syntactic structures can always co-refer if their descriptions are compatible, and furthermore, it is impossible to block the possibility of coreference if the descriptions are compatible. These two ideas, taken together, imply that during the process of analyzing the structure of a given utterance, merely adding to the emerging description may change the set of trees ultimately described (just as adding "honest" to the phrase "all politicians" may radically change the set described). We have also sketched some implications of this theory that not only suggest a new analysis of coordinate structures, but also suggest that coordinate structures might be much easier to analyze than current parsing techniques would suggest. We are currently working to flesh out the analyses presented above. We are also working on an analysis of gapping and elision phenomena which seems to fall naturally out of this framework. This new analysis is surprising in that it makes crucial use of descriptions even less fully specified than those we have discussed in this paper, by using the notations we have introduced here to fuller advantage. These emerging analyses move yet further away from the traditional view of either trees or phrase markers as an appropriate framework for expressing syntactic generalizations. Appendix:
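The two key ideas just summarized can be rendered as a small illustrative sketch; this is a toy rendering in Python with invented node names, not the parser or grammar described above. The description only ever grows: positing an NP over "frogs" and later committing to the (13.1) or (13.2) continuation adds predications without revoking any.

# Toy sketch of a monotonically growing D-theory description.
# Node names ("np_frogs", "vp1", ...) are invented for illustration.

class Description:
    def __init__(self):
        self.facts = set()                    # predications D(upper, lower)

    def assert_D(self, upper, lower):
        self.facts.add((upper, lower))        # add-only: nothing is ever retracted

    def dominated_by(self, node):
        """Everything asserted to be (transitively) below `node`."""
        below, frontier = set(), {node}
        while frontier:
            frontier = {lo for up, lo in self.facts if up in frontier} - below
            below |= frontier
        return below

d = Description()
# "Birds eat small worms and frogs ..." analyzed up to "frogs":
d.assert_D("s1", "np_birds"); d.assert_D("s1", "vp1")
d.assert_D("vp1", "v_eat"); d.assert_D("vp1", "np_worms")
d.assert_D("np_frogs", "frogs")               # posit an NP over "frogs" and continue

# Continuation (13.1): a verb follows, so a new S would be asserted to
# dominate np_frogs, e.g. d.assert_D("s2", "np_frogs").
# Continuation (13.2): instead, the new NP is stated to be under the previous
# VP; every earlier predication (including D(np_frogs, "frogs")) remains true.
d.assert_D("vp1", "np_frogs")
print(sorted(d.dominated_by("vp1")))
# ['frogs', 'np_frogs', 'np_worms', 'v_eat']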
null
null
null
null
{ "paperhash": [ "hindle|deterministic_parsing_of_syntactic_non-fluencies", "shipman|towards_minimal_data_structures_for_deterministic_parsing", "marcus|a_theory_of_syntactic_recognition_for_natural_language", "vanlehn|determining_the_scope_of_english_quantifiers", "quirk|a_grammar_of_contemporary_english", "brady|natural_language_generation_as_a_computational_problem:_an_introduction", "church|coping_with_syntactic_ambiguity_or_how_to_put_the_block_in_the_box_on_the_table", "chomsky|some_concepts_and_consequences_of_the_theory_of_government_and_binding", "lasnik|a_restrictive_theory_of_transformational_grammar" ], "title": [ "Deterministic Parsing of Syntactic Non-fluencies", "Towards Minimal Data Structures for Deterministic Parsing", "A theory of syntactic recognition for natural language", "Determining the Scope of English Quantifiers", "A Grammar of contemporary English", "Natural Language Generation as a Computational Problem: an Introduction", "Coping with Syntactic Ambiguity or How to Put the Block in the Box on the Table", "Some Concepts and Consequences of the Theory of Government and Binding", "A RESTRICTIVE THEORY OF TRANSFORMATIONAL GRAMMAR" ], "abstract": [ "It is often remarked that natural language, used naturally, is unnaturally ungrammatical. *Spontaneous speech contains all manner of false starts, hesitations, and self-corrections that disrupt the well-formedness of strings. It is a mystery then, that despite this apparent wide deviation from grammatical norms, people have little difficulty understanding the non-fluent speech that is the essential medium of everyday life. And it is a still greater mystery that children can succeed in acquiring the grammar of a language on the basis of evidence provided by a mixed set of apparently grammatical and ungrammatical strings.", "The determinism hypothesis suggests that natural language may be parsed in a single pass without resort to backtracking techniques. The PARSIFAL system, developed by Marcus, incorporates this philosophy in an English language parser. Here, we show that the data structures used by this parser may be considerably simplified resulting in more elegant grammatical specifications.", "Abstract : Assume that the syntax of natural language can be parsed by a left-to-right deterministic mechanism without facilities for parallelism or backup. It will be shown that this 'determinism' hypothesis, explored within the context of the grammar of English, leads to a simple mechanism, a grammar interpreter. (Author)", "Abstract : One can represent the meaning of English sentences in a formal logical notation such that the translation of English into this logical form is simple and general. This report covers a particular kind of meaning, namely quantifier scope, and for a particular part of the translation, namely the syntactic influence on the translation. Three different logical forms are presented, and their translation rules are examined. One of the logical forms is predicate calculus. The translation rules for it were developed by Robert May (may 1977). The other two logical forms are Skolem form and a simple computer programming language. The translation rules for these two logical forms are new. All three sets of translation rules are shown to be general, in the sense that the same rules express the constraints that syntax imposes on certain other linguistic phenomena. For example, the rule that constrain the translation into Skolem form are shown to constrain definite np anaphora as well. 
A large body of carefully collected data is presented, and used to assess the empirical accuracy of each of the theories.", "The publication of this important volume fills the need for an up-to-date survey of the entire scope of English syntax. Though it falls short of a perfectly balanced treatment of the whole system, it touches upon all the essential topics and treats in depth a number of crucial problems of current interest such as case, ellipsis, and information focus. Even the publishers’ claims are vindicated to a surprising degree. The statement that it “constitutes a standard reference grammar” is reasonably well justified. Recent investigations, including the authors’ own research, are integrated into the “accumulated grammatical tradition” quite effectively. But whether it is “the fullest and most comprehensive synchronic description of English grammar ever written” is arguable. No one acquainted with Poutsma’s work would agree with that. Very advanced foreign students o r native speakers of English who want to learn about basic grammar will find some of thel sections suitable for their needs, such as the lesson about restrictive and nonrestrictive relative clauses, though even here some of the explanations require very intensive study. Most of the chapters are rather like an advanced textbook for teachers or linguists. The organization and viewpoint give the impression of a carefully planned university lecture supplemented by diagrams, charts, and lists. A good example is the lesson on auxiliaries and verb phrases, which starts with a set of sample sentences demonstrating that “should see” and “happen to see” behave differently under various transformations and expansions. After the essential concepts are explained and exemplified-lexical verb, semi-auxiliary, operator, and the like-lists and paradigms are given as in the usual reference work. A particularly useful feature of this chapter is the outline of modal auxiliaries with examples of their divergent meanings.", "This chapter contains sections titled: Introduction, Results for Test Speakers, A Computational Model, The Relationship Between the Speaker and the Linguistics Component, The Internal Structure of the Linguistic Component, An Example, Contributions and Limitations", "Sentences are far more ambiguous than one might have thought. There may be hundreds, perhaps thousands, of syntactic parse trees for certain very natural sentences of English. This fact has been a major problem confronting natural language processing, especially when a large percentage of the syntactic parse trees are enumerated during semantic/pragmatic processing. In this paper we propose some methods for dealing with syntactic ambiguity in ways that exploit certain regularities among alternative parse trees. These regularities will be expressed as linear combinations of ATN networks, and also as sums and products of formal power series. We believe that such encoding of ambiguity will enhance processing, whether syntactic and semantic constraints are processed separately in sequence or interleaved together.", "Noam Chomsky, more than any other researcher, has radically restructured the study of human language over the past several decades. While the study of government and binding is an outgrowth of Chomsky's earlier work in transformational grammar, it represents a significant shift in focus and a new direction of investigation into the fundamentals of linguistic theory.This monograph consolidates and extends this new approach. 
It serves as a concise introduction to government-binding theory, applies it to several new domains of empirical data, and proposes some revisions to the principles of the theory that lead to greater unification, descriptive scope, and explanatory depth.Earlier work in the theory of grammar was concerned primarily with rule systems. The accent in government-binding theory, however, is on systems of principles of universal grammar. In the course of this book, Chomsky proposes and evaluates various general principles that limit and constrain the types of rules that are possible, and the ways they interact and function. In particular, he proposes that rule systems are in fact highly restricted in variety: only a finite number of grammars are attainable in principle, and these fall into a limited set of types.Another consequence of this shift in focus is the change of emphasis from derivations to representations. The major topic in the study of syntactic representations is the analysis of empty categories, which is a central theme of the book. After his introductory comments and a chapter on the variety of rule system, Chomsky takes up, in turn, the general properties of empty categories, the functional determination of empty categories, parasitic gaps, and binding theory and the typology of empty categories.Noam Chomsky is Institute Professor at MIT. The book is the sixth in the series Linguistic Inquiry Monographs, edited by Samuel Jay Keyser.", "A set theoretic formalization of a transformational theory in the spirit of Chomsky’s LSLT is presented. The theory differs from Chomsky’s, and more markedly from most current theories, in the extent to which restrictions are imposed on descriptive power. Many well-justified and linguistically significant limitations on structural description and structural change are embodied in the present formalization. We give particular attention to the constructs Phrase Marker and Transformational Cycle providing modifications which offer increases in both simplicity and explanatory power." ], "authors": [ { "name": [ "Donald Hindle" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "David W. Shipman", "Mitchell P. Marcus" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Mitchell P. Marcus" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "K. VanLehn" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "R. Quirk" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "M. Brady", "R. Berwick" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Kenneth Ward Church", "R. Patil" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Noam Chomsky" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Howard Lasnik", "J. 
Kupin" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null, null, null, null, null, null, null ], "s2_corpus_id": [ "5222302", "5399196", "6616065", "122721053", "144541309", "63032395", "9330325", "143496935", "61406059" ], "intents": [ [], [], [ "background" ], [ "background" ], [], [], [ "background" ], [ "background" ], [ "background" ] ], "isInfluential": [ false, false, false, false, false, false, false, false, false ] }
Problem: The paper outlines a theory of linguistic structure called Description theory (D-theory) which aims to provide a framework for explaining the syntax and semantics of natural language in a computational manner. Solution: The hypothesis is that by using descriptions of syntactic structures with the concept of dominance rather than direct dominance, D-theory can offer a more flexible and accurate approach to parsing coordinate structures and handling syntactic phenomena that resist traditional tree-building parsers.
500
0.428
null
null
null
null
null
null
null
null
60d4f88ac916c09edf6437a7f4212b898b7c1292
13496735
null
Knowledge Structures in {UC}, the {UNIX} Consultant
The knowledge structures implemented in UC, the UNIX Consultant, are sufficient for UC to reply to a large range of user queries in the domain of the UNIX operating system. This paper describes how these knowledge structures are used in the natural language tasks of parsing, reference, planning, goal detection, and generation, and how they are organized to enable efficient access even with the large database of an expert system. The structuring of knowledge to provide direct answers to common queries and the high usability and efficiency of knowledge structures allow UC to hold an interactive conversation with a user.
{ "name": [ "Chin, David N." ], "affiliation": [ null ] }
null
null
21st Annual Meeting of the Association for Computational Linguistics
1983-06-01
12
9
null
UC is a natural language program that converses in English with users in the domain of the UNIX operating system. UC provides information on usage of system utilities, UNIX terminology, and plans for accomplishing specific tasks in the UNIX environment, all upon direct query by the user. In order to accomplish these tasks, UC must perforce have a considerable knowledge base, a large part of which is particular to the UNIX domain. The specific representations used in this knowledge base are essential to the successful operation of UC. Not only are the knowledge structures used in parsing, inference, planning, goal detection, and generation, but also the format of representation must permit the high efficiency in access and processing of the knowledge that is required in an interactive system like UC. This paper describes the details of this representation scheme and how it manages to satisfy these goals of usability and efficiency. Other aspects of the UC system are described in Arens (1982), Faletti (1982), Jacobs (1983), Rau (1983), and Wilensky and Arens (1980a and b). An overview of the UC system can be found in Wilensky (1982). (UNIX is a trademark of Bell Laboratories. This research was sponsored in part by the Office of Naval Research under contract N00014-80-C-0732 and the National Science Foundation under grant MCS79-06543.)
Many different representation schemes were considered for UC. In the past, expert systems have used relations in a database (e.g. the UCC system of Douglass and Hegner, 1982), production rules, and/or predicate calculus for knowledge representation. Although these formats have their strong points, it was felt that none provided the flexibility needed for the variety of tasks in UC. Relations in a database are good for large amounts of data, but the database query languages which must be used for access to the knowledge are usually poor representation languages. Production rules encode procedural knowledge in an easy to use format, but do not provide much help for representing declarative knowledge. Predicate calculus provides built-in inference mechanisms, but does not provide sufficient mechanism for representing the linguistic forms found in natural language. Also considered were various representation languages, in particular KL-ONE (Schmolze and Brachman, 1981). However, at the time these did not seem to provide facilities for efficient access in very large knowledge bases. The final decision was to use a frame-like representation where some of the contents are based on Schank's conceptual dependencies, and to store the knowledge structures in PEARL databases (PEARL is an AI package developed at Berkeley that provides efficient access to Lisp representations through hashing mechanisms, c.f. Deering et al., 1981). Some of the knowledge structures used in UC are refinements of formats developed by Joe Faletti and Peter Norvig. Yigal Arens is responsible for the underlying memory structure used in UC and, of course, this project would not be possible without the guidance and advice of Robert Wilensky.
Based on Minsky's theory of frames, the knowledge structures in UC are frames which have a slot-filler format. The idea is to store all relevant information about a particular entity together for efficient access. For example, the following representation for users has the slots userid, home-directory, and group, which are filled by a userid, a directory, and a set of group-id's respectively.

(create expanded person user (user-id user-id) (home-directory directory) (group setof group-id))

In addition, users inherit the slots of person frames such as a person's name. To see how the knowledge structures are actually used, it is instructive to follow the processing of queries in some detail. UC first parses the English input into an internal representation. For instance, the query of example one is parsed into a question frame with the single slot, cd, which is filled by a planfor frame. The question asks what is the plan for (represented as a planfor with an unknown method) achieving the result of changing the write protection (mesg state) of a terminal (terminal1, which is actually a frame that is not shown).

(question (cd (planfor (result (state-change (actor terminal1) (state-name mesg) (from unspecified) (to unspecified))) (method *unknown*))))

Once the input is parsed, UC, which is a data driven program, looks in its database to find out what to do with the representation of the input. An assertion frame would normally result in additions to the database and an imperative might result in actions (depending on the goal analysis). In this case, when UC sees a question with a planfor where the method is unknown, it looks in its database for an out-planfor with a query slot that matches the result slot of the planfor in the question. This knowledge is encoded associatively in a memory-association frame, where the recall-key is the associative component and the cluster slot contains a set of structures which are associated with the structure in the recall-key slot.

(memory-association (recall-key (question (cd (planfor (result ?conc) (method *unknown*))))) (cluster ((out-planfor (query ?conc) (plan ?*any*)))))

The purpose of the memory-association frame is to simulate the process of reminding and to provide very flexible control flow for UC's data driven processor. After the question activates the memory-association, a new out-planfor is created and added to working memory. This out-planfor in turn matches and activates the following knowledge structure in UC's database:

(out-planfor (query (state-change (actor terminal) (state-name mesg) (from ?from-state) (to ?to-state))) (plan (output (cd (planfor67 planfor68)))))

The meaning of this out-planfor is that if a query about a state-change involving the mesg state of a terminal is ever encountered, then the proper response is the output frame in the plan slot. All output frames in UC are passed to the generator. The above output frame contains the planfors numbered 67 and 68. Planfor 67 states that a plan for changing the mesg state of a terminal from on to off is for the user to send the command mesg to UNIX with the argument "y". Planfor 68 is similar, only with the opposite result and with argument "n". In general, UC contains many of these planfors which define the purpose (result slot) of a plan (method slot). The plan is usually a simple command, although there are more complex meta plans for constructing sequences of simple commands such as might be found in a UNIX pipe or in conditionals.
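The data-driven flow just described (a parsed question activates a memory-association, which posits an out-planfor, whose plan slot is handed to the generator) can be sketched as follows. The dictionary-based frames and the pattern matcher are simplifications invented for illustration; they are not UC's PEARL machinery.

# Toy sketch of the flow: question frame -> memory-association -> out-planfor
# -> output. Frame layouts and the matcher are assumptions, not UC's code.

def matches(pattern, frame):
    """Strings beginning with '?' are variables and match anything."""
    if isinstance(pattern, dict):
        return isinstance(frame, dict) and all(
            key in frame and matches(val, frame[key])
            for key, val in pattern.items())
    return (isinstance(pattern, str) and pattern.startswith("?")) or pattern == frame

# Parsed form of "How can I change the write protection on my terminal?"
question = {"type": "question",
            "cd": {"type": "planfor", "method": "*unknown*",
                   "result": {"type": "state-change",
                              "actor": "terminal1", "state-name": "mesg"}}}

memory_association = {
    "recall-key": {"type": "question",
                   "cd": {"type": "planfor", "result": "?conc",
                          "method": "*unknown*"}},
    # the cluster posits an out-planfor query built from the question's result
    "make_cluster": lambda q: {"type": "out-planfor", "query": q["cd"]["result"]}}

out_planfor = {"query": {"type": "state-change",
                         "actor": "?terminal", "state-name": "mesg"},
               "plan": "To turn write permission on, type 'mesg y'; "
                       "to turn it off, type 'mesg n'."}

if matches(memory_association["recall-key"], question):          # reminding step
    posited = memory_association["make_cluster"](question)
    if matches(out_planfor["query"], posited["query"]):           # compiled answer
        print(out_planfor["plan"])                                 # to the generator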
In UC, out-planfors represent "compiled" answers in an expert consultant, where the consultant has encountered a particular query so often that the consultant already has a rote answer prepared. Usually the question that is in the query slot of the out-planfor is similar to the result of the planfor that is in the output frame in the plan slot of the out-planfor. However, this is not necessarily the case, since the out-planfor may have anything in its plan slot. For example, some queries invoke UC's interface with UNIX (due to Margaret Butler) to obtain specific information for the user. The use of memory-associations and out-planfors in UC provides a direct association between common user queries and their solutions. This direct link enables UC to process commonplace queries quickly. When UC encounters a query that cannot be handled by the out-planfors, the planning component of UC (PANDORA, c.f. Faletti, 1982) is activated. The planner component uses the information in the UC databases to create individualized plans for specific user queries. The description of that process is beyond the scope of this paper.

The representation of definitions requires a different approach than the above representations for actions and plans. Here one can take advantage of the practicality of terminology in a specialized domain such as UNIX. Specifically, objects in the UNIX domain usually have definite functions which serve well in the definition of the object. In example two, the type declaration of a search-path includes a use slot for the search-path which contains information about the main function of search paths. The following declaration defines a search-path as a kind of functional-object with a path slot that contains a set of directories and a use slot which says that search paths are used in searching for programs by UNIX.

(create expanded functional-object search-path (path setof directory) (use ($search (actor *Unix*) (object program) (location ?search-path))) . . . )

Additional information useful in generating a definition can be found in the slots of a concept's declaration. These slots describe the parts of a concept and are ordered in terms of importance. Thus in the example, the fact that a search-path is composed of a set of directories was used in the definition given in the examples. Other useful information for building definitions is encoded in the hierarchical structure of concepts in UC. This is not used in the above example since a search-path is only an expanded version of the theoretical concept, functional-object. However, with other objects such as directory, the fact that directory is an expanded version of a file (a directory is a file which is used to store other files) is actually used in the definition.
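A definition built from a concept declaration's slots, as described above, might be sketched like this; the dictionary layout and the sentence template are assumptions made for illustration, not the phrasal machinery UC's generator actually uses.

# Sketch: composing a definition from a concept's parts and "use" slot.
# The layout and wording template are invented for illustration.

search_path = {
    "name": "search path",
    "expands": "functional-object",
    "parts": ["a list of directories"],          # ordered by importance
    "use": "the operating system searches them for programs to execute",
}

def define(concept):
    head = concept["parts"][0] if concept["parts"] else f"a kind of {concept['expands']}"
    sentence = f"A {concept['name']} is {head}"
    if concept.get("use"):
        sentence += f"; {concept['use']}"
    return sentence + "."

print(define(search_path))
# A search path is a list of directories; the operating system searches
# them for programs to execute.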
The third type of query involves failed preconditions of plans or missing steps in a plan. In UC the preconditions of a plan are listed in a preconds frame. For instance, in example 3 above, the relevant preconds frame is:

(preconds (plan (mtrans (actor *user*) (object (command (name rmdir) (args (?directoryname)) (input stdin) (output stdout) (diagnostic stdout))) (from *user*) (to *Unix*))) (are ((state (actor (all (var ?file) (desc (file)) (pred (inside-of (object ?directoryname))))) (state-name physical-state) (value non-existing)) . . .

This states that one of the preconditions for removing a directory is that it must be empty. In analyzing the example, UC first finds the goal of the user, namely to delete the directory Trap. Then from this goal, UC looks for a plan for that goal among planfors which have that goal in their result slots. This plan is shown above. Once the plan has been found, the preconds for that plan are checked, which in this case leads to the fact that a directory must be empty before it can be deleted. Here UC actually checks with UNIX, looking in the user's area for the directory Trap, and discovers that this precondition is indeed violated. If UC had not been able to find the directory, UC would suggest that the user personally check for the preconditions. Of course, if the first precondition was found to be satisfied, the next would be checked and so on. In a multi-step plan, UC would also verify that the steps of the plan had been carried out in the proper sequence by querying the user or checking with UNIX.
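A sketch of the precondition check just traced, testing the "directory must be empty" condition against the actual file system, might look like the following; only standard-library calls are used, and the frame shape and response wording are invented for illustration.

# Sketch: verify the preconditions of the "rmdir" plan against the file
# system before explaining a failure. Frame shape and messages are invented.

import os

RMDIR_PRECONDS = [
    ("the directory must be empty",
     lambda d: len(os.listdir(d)) == 0),
]

def diagnose_rmdir(directory):
    for description, holds in RMDIR_PRECONDS:
        try:
            satisfied = holds(directory)
        except OSError:
            # the directory could not be examined at all
            return (f"Could not examine {directory}; "
                    "please check the preconditions yourself.")
        if not satisfied:
            return (f"The directory {directory} must be empty "
                    "before the directory can be deleted.")
    return "All preconditions hold; check that the plan's steps were done in order."

print(diagnose_rmdir("Trap"))   # the directory named in the user's query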
The first step in the design of knowledge structures involves determining what forms of knowledge will be needed. In this case, the domain of conversation for a UNIX consultant is specific enough that it reduces the range of knowledge structures necessary for the task. Some insight into the kinds of knowledge structures that are needed can be gleaned from looking at the variety of questions which users actually ask. Since UC is aimed at the naive user of UNIX, a majority of the queries UC receives are of the following forms (taken from actual UC sessions):

User: How can I change the write protection on my terminal?
UC: To turn on write permission on your terminal, type 'mesg y'. To turn off write permission on your terminal, type 'mesg n'.
User: What is a search path?
UC: A search path in UNIX is a list of directories in which the operating system searches for programs to execute.
User: Why can't I remove the directory Trap?
UC: The directory Trap must be empty before the directory can be deleted.

Questions of the first form, asking how to do something, are usually requests for the names and/or usage of UNIX utilities. The user generally states the goals or results that are desired, or the actions to be performed, and then asks for a specific plan for achieving these wishes. So to respond to how questions, UC must encode in its database a large number of plans for accomplishing desired results or, equivalently, the knowledge necessary to generate those plans as needed. The second question type is a request for the definition of certain UNIX or general operating systems terminology. Such definitions can be provided easily by canned textual responses. However, UC generates all of its output. The expression of knowledge in a format that is also useful for generation is a much more difficult problem than simply storing canned answers. In the third type of query, the user describes a situation where his expectations have failed to be substantiated and asks UC to explain why. Many such queries involve plans where preconditions of those plans have been violated or steps omitted from the plans. The job that UC has is to determine what the user was attempting to do and then to determine whether or not preconditions may have been violated or steps left out by the user in the execution of the plans. Besides the ability to represent all the different forms of knowledge that might be encountered, knowledge structures should be appropriate to the tasks for which they will be used. This means that it should be easy to represent knowledge, manipulate the knowledge structures, use them in processing, and do all that efficiently in both time and space. In UC, these requirements are particularly hard to meet since the knowledge structures are used for so many diverse purposes. The knowledge structures in UC are stored in PEARL databases which provide efficient access by hash indexing. Frames are indexed by combinations of the frame type and/or the contents of selected slots. For instance, the planfor of example one is indexed using a hashing key based on the state-change in the planfor's result slot. This planfor is stored by the fact that it is a planfor for the state-change of a terminal's mesg state. This degree of detail in the indexing scheme allows this planfor to be immediately recovered whenever a reference is made to a state-change in a terminal's mesg state.
Similarly, a memory-association is indexed by the filler of the recall-key slot, an out-planfor is indexed using the contents of the query slot of the out-planfor, and a preconds is indexed by the plan in the plan slot of the preconds. Indeed, all knowledge structures in UC have associated with them one or more indexing schemes which specify how to generate hashing keys for storage of the knowledge structure in the UC databases. These indexing methods are specified at the time that the knowledge structures are defined. Thus, although care must be taken to choose good indexing schemes when defining the structure of a frame, the indexing scheme is used automatically whenever another instance of the frame is added to the UC databases. Also, even though the indexing schemes for large structures like planfors involve many levels of embedded slots and frames, simpler knowledge structures usually have simpler indexing schemes. For example, the representations for users in UC are stored in two ways: by the fact that they are users and have a specific account name, and by the fact that they are users and have some given real name. The basic idea behind using these complex indexing schemes is to simulate a real associative memory by using the hashing mechanisms provided in PEARL databases. This associative memory mechanism fits well with the data-driven control mechanism of UC and is useful for a great variety of tasks. For example, goal analysis of speech acts can be done through this associative mechanism:

(memory-association (recall-key (assertion (cd (goal (planner ?person) (objective ?obj))))) (cluster ((out-planfor (cd ?obj)))))

In the above example (provided by Jim Mayfield), UC analyzes the user's statement of wanting to do something as a request for UC to explain how to achieve that goal. UC is a working system which is still under development. In size, UC is currently two and a half megabytes, of which half a megabyte is FRANZ lisp. Since the knowledge base is still growing, it is uncertain how much of an impact even more knowledge will have on the system, especially when the program becomes too large to fit in main memory. In terms of efficiency, queries to UC take between two and seven seconds of CPU time on a VAX 11/780. Currently, all the knowledge in UC is hand coded; however, efforts are under way to automate the process.
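A minimal sketch of the hash-indexed, associative frame storage described above; this is not PEARL itself, and the index functions and frame layouts are invented for illustration.

# Sketch: each frame type declares how to build its hashing keys from the
# frame type plus selected slots, so a frame is recovered directly from a
# partial description. Invented layouts; not the PEARL package.

from collections import defaultdict

INDEX_SCHEMES = {
    # a planfor is indexed by the state-change in its result slot
    "planfor": lambda f: [("planfor", f["result"]["state-name"],
                           f["result"]["actor-type"])],
    # a user is indexed both by account name and by real name
    "user": lambda f: [("user", f["user-id"]), ("user", f["name"])],
}

class FrameDB:
    def __init__(self):
        self.buckets = defaultdict(list)

    def store(self, frame):
        for key in INDEX_SCHEMES[frame["type"]](frame):   # keys built automatically
            self.buckets[key].append(frame)

    def fetch(self, key):
        return self.buckets.get(key, [])

db = FrameDB()
db.store({"type": "planfor", "method": "mesg y",
          "result": {"state-name": "mesg", "actor-type": "terminal"}})
db.store({"type": "user", "user-id": "dchin", "name": "David Chin"})

# Direct recovery from a partial description, simulating associative memory:
print(db.fetch(("planfor", "mesg", "terminal")))
print(db.fetch(("user", "David Chin")))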
The knowledge structures developed for UC have so far shown good efficiency in both access time and space usage within the limited domain of processing queries to a Unix Consultant. The knowledge structures fit well in the framework of data-driven programming used in UC. Ease of use is somewhat subjective, but beginners have been able to add to the UC knowledge base after an introductory graduate course in AI. Efforts underway to extend UC in such areas as dialogue will further test the merit of this representation scheme.
Main paper: introduction: UC is a natural language program that converses in English with users in the domain of the UNIX operating system. UC provides information on usage of system utilities, UNIX terminology, and plans for accomplishing specific tasks in the UNIX environment, all upon direct query by the user. In order to accomplish these tasks, UC must perforce have a considerable knowledge base, a large part of which is particular to the UNIX domain. The specific representations used in this knowledge base are essential to the successful operation of UC. Not only are the knowledge structures used in parsing, inference, planning, goal detection, and generation, but also the format of representation must permit the high efficiency in access and processing of the knowledge that is required in an interactive system like UC. This paper describes the details of this representation scheme and how it manages to satisfy these goals of usability and efficiency. Other aspects of the UC system are described in Arens (1982), Faletti (1982), Jacobs (1983), Rau (1983), and Wilensky and Arens (1980a and b). An overview of the UC system can be found in Wilensky (1982). (UNIX is a trademark of Bell Laboratories. This research was sponsored in part by the Office of Naval Research under contract N00014-80-C-0732 and the National Science Foundation under grant MCS79-06543.)

specifications for the representation: The first step in the design of knowledge structures involves determining what forms of knowledge will be needed. In this case, the domain of conversation for a UNIX consultant is specific enough that it reduces the range of knowledge structures necessary for the task. Some insight into the kinds of knowledge structures that are needed can be gleaned from looking at the variety of questions which users actually ask. Since UC is aimed at the naive user of UNIX, a majority of the queries UC receives are of the following forms (taken from actual UC sessions):

User: How can I change the write protection on my terminal?
UC: To turn on write permission on your terminal, type 'mesg y'. To turn off write permission on your terminal, type 'mesg n'.
User: What is a search path?
UC: A search path in UNIX is a list of directories in which the operating system searches for programs to execute.
User: Why can't I remove the directory Trap?
UC: The directory Trap must be empty before the directory can be deleted.

Questions of the first form, asking how to do something, are usually requests for the names and/or usage of UNIX utilities. The user generally states the goals or results that are desired, or the actions to be performed, and then asks for a specific plan for achieving these wishes. So to respond to how questions, UC must encode in its database a large number of plans for accomplishing desired results or, equivalently, the knowledge necessary to generate those plans as needed. The second question type is a request for the definition of certain UNIX or general operating systems terminology. Such definitions can be provided easily by canned textual responses. However, UC generates all of its output. The expression of knowledge in a format that is also useful for generation is a much more difficult problem than simply storing canned answers. In the third type of query, the user describes a situation where his expectations have failed to be substantiated and asks UC to explain why. Many such queries involve plans where preconditions of those plans have been violated or steps omitted from the plans.
The job that UC has is to determine what the user was attempting to do and then to determine whether or not preconditions may have been violated or steps left out by the user in the execution of the plans. Besides the ability to represent all the different forms of knowledge that might be encountered, knowledge structures should be appropriate to the tasks for which they will be used. This means that it should be easy to represent knowledge, manipulate the knowledge structures, use them in processing, and do all that efficiently in both time and space. In UC, these requirements are particularly hard to meet since the knowledge structures are used for so many diverse purposes.

the choice: Many different representation schemes were considered for UC. In the past, expert systems have used relations in a database (e.g. the UCC system of Douglass and Hegner, 1982), production rules, and/or predicate calculus for knowledge representation. Although these formats have their strong points, it was felt that none provided the flexibility needed for the variety of tasks in UC. Relations in a database are good for large amounts of data, but the database query languages which must be used for access to the knowledge are usually poor representation languages. Production rules encode procedural knowledge in an easy to use format, but do not provide much help for representing declarative knowledge. Predicate calculus provides built-in inference mechanisms, but does not provide sufficient mechanism for representing the linguistic forms found in natural language. Also considered were various representation languages, in particular KL-ONE (Schmolze and Brachman, 1981). However, at the time these did not seem to provide facilities for efficient access in very large knowledge bases. The final decision was to use a frame-like representation where some of the contents are based on Schank's conceptual dependencies, and to store the knowledge structures in PEARL databases (PEARL is an AI package developed at Berkeley that provides efficient access to Lisp representations through hashing mechanisms, c.f. Deering et al., 1981).

the implementation: Based on Minsky's theory of frames, the knowledge structures in UC are frames which have a slot-filler format. The idea is to store all relevant information about a particular entity together for efficient access. For example, the following representation for users has the slots userid, home-directory, and group, which are filled by a userid, a directory, and a set of group-id's respectively.

(create expanded person user (user-id user-id) (home-directory directory) (group setof group-id))

In addition, users inherit the slots of person frames such as a person's name. To see how the knowledge structures are actually used, it is instructive to follow the processing of queries in some detail. UC first parses the English input into an internal representation. For instance, the query of example one is parsed into a question frame with the single slot, cd, which is filled by a planfor frame.
The question asks what is the plan for (represented as a planfor with an unknown method) achieving the result of changing the write protection (mesg state) of a terminal (terminal1, which is actually a frame that is not shown).

(question (cd (planfor (result (state-change (actor terminal1) (state-name mesg) (from unspecified) (to unspecified))) (method *unknown*))))

Once the input is parsed, UC, which is a data driven program, looks in its database to find out what to do with the representation of the input. An assertion frame would normally result in additions to the database and an imperative might result in actions (depending on the goal analysis). In this case, when UC sees a question with a planfor where the method is unknown, it looks in its database for an out-planfor with a query slot that matches the result slot of the planfor in the question. This knowledge is encoded associatively in a memory-association frame, where the recall-key is the associative component and the cluster slot contains a set of structures which are associated with the structure in the recall-key slot.

(memory-association (recall-key (question (cd (planfor (result ?conc) (method *unknown*))))) (cluster ((out-planfor (query ?conc) (plan ?*any*)))))

The purpose of the memory-association frame is to simulate the process of reminding and to provide very flexible control flow for UC's data driven processor. After the question activates the memory-association, a new out-planfor is created and added to working memory. This out-planfor in turn matches and activates the following knowledge structure in UC's database:

(out-planfor (query (state-change (actor terminal) (state-name mesg) (from ?from-state) (to ?to-state))) (plan (output (cd (planfor67 planfor68)))))

The meaning of this out-planfor is that if a query about a state-change involving the mesg state of a terminal is ever encountered, then the proper response is the output frame in the plan slot. All output frames in UC are passed to the generator. The above output frame contains the planfors numbered 67 and 68. Planfor 67 states that a plan for changing the mesg state of a terminal from on to off is for the user to send the command mesg to UNIX with the argument "y". Planfor 68 is similar, only with the opposite result and with argument "n". In general, UC contains many of these planfors which define the purpose (result slot) of a plan (method slot). The plan is usually a simple command, although there are more complex meta plans for constructing sequences of simple commands such as might be found in a UNIX pipe or in conditionals. In UC, out-planfors represent "compiled" answers in an expert consultant, where the consultant has encountered a particular query so often that the consultant already has a rote answer prepared. Usually the question that is in the query slot of the out-planfor is similar to the result of the planfor that is in the output frame in the plan slot of the out-planfor. However, this is not necessarily the case, since the out-planfor may have anything in its plan slot. For example, some queries invoke UC's interface with UNIX (due to Margaret Butler) to obtain specific information for the user. The use of memory-associations and out-planfors in UC provides a direct association between common user queries and their solutions. This direct link enables UC to process commonplace queries quickly. When UC encounters a query that cannot be handled by the out-planfors, the planning component of UC (PANDORA, c.f.
Faletti, 1982) is activated. The planner component uses the information in the UC databases to create individualized plans for specific user queries. The description of that process is beyond the scope of this paper. The representation of definitions requires a different approach than the above representations for actions and plans. Here one can take advantage of the practicality of terminology in a specialized domain such as UNIX. Specifically, objects in the UNIX domain usually have definite functions which serve well in the definition of the object. In example two, the type declaration of a search-path includes a use slot for the search-path which contains information about the main function of search paths. The following declaration defines a search-path as a kind of functional-object with a path slot that contains a set of directories and a use slot which says that search paths are used in searching for programs by UNIX.

(create expanded functional-object search-path (path setof directory) (use ($search (actor *Unix*) (object program) (location ?search-path))) . . . )

Additional information useful in generating a definition can be found in the slots of a concept's declaration. These slots describe the parts of a concept and are ordered in terms of importance. Thus in the example, the fact that a search-path is composed of a set of directories was used in the definition given in the examples. Other useful information for building definitions is encoded in the hierarchical structure of concepts in UC. This is not used in the above example since a search-path is only an expanded version of the theoretical concept, functional-object. However, with other objects such as directory, the fact that directory is an expanded version of a file (a directory is a file which is used to store other files) is actually used in the definition. The third type of query involves failed preconditions of plans or missing steps in a plan. In UC the preconditions of a plan are listed in a preconds frame. For instance, in example 3 above, the relevant preconds frame is:

(preconds (plan (mtrans (actor *user*) (object (command (name rmdir) (args (?directoryname)) (input stdin) (output stdout) (diagnostic stdout))) (from *user*) (to *Unix*))) (are ((state (actor (all (var ?file) (desc (file)) (pred (inside-of (object ?directoryname))))) (state-name physical-state) (value non-existing)) . . .

This states that one of the preconditions for removing a directory is that it must be empty. In analyzing the example, UC first finds the goal of the user, namely to delete the directory Trap. Then from this goal, UC looks for a plan for that goal among planfors which have that goal in their result slots. This plan is shown above. Once the plan has been found, the preconds for that plan are checked, which in this case leads to the fact that a directory must be empty before it can be deleted. Here UC actually checks with UNIX, looking in the user's area for the directory Trap, and discovers that this precondition is indeed violated. If UC had not been able to find the directory, UC would suggest that the user personally check for the preconditions. Of course, if the first precondition was found to be satisfied, the next would be checked and so on. In a multi-step plan, UC would also verify that the steps of the plan had been carried out in the proper sequence by querying the user or checking with UNIX.
storage for efficient access: The knowledge structures in UC are stored in PEARL databases which provide efficient access by hash indexing. Frames are indexed by combinations of the frame type and/or the contents of selected slots. For instance, the planfor of example one is indexed using a hashing key based on the state-change in the planfor's result slot. This planfor is stored by the fact that it is a planfor for the state-change of a terminal's mesg state. This degree of detail in the indexing scheme allows this planfor to be immediately recovered whenever a reference is made to a state-change in a terminal's mesg state. Similarly, a memory-association is indexed by the filler of the recall-key slot, an out-planfor is indexed using the contents of the query slot of the out-planfor, and a preconds is indexed by the plan in the plan slot of the preconds. Indeed, all knowledge structures in UC have associated with them one or more indexing schemes which specify how to generate hashing keys for storage of the knowledge structure in the UC databases. These indexing methods are specified at the time that the knowledge structures are defined. Thus, although care must be taken to choose good indexing schemes when defining the structure of a frame, the indexing scheme is used automatically whenever another instance of the frame is added to the UC databases. Also, even though the indexing schemes for large structures like planfors involve many levels of embedded slots and frames, simpler knowledge structures usually have simpler indexing schemes. For example, the representations for users in UC are stored in two ways: by the fact that they are users and have a specific account name, and by the fact that they are users and have some given real name. The basic idea behind using these complex indexing schemes is to simulate a real associative memory by using the hashing mechanisms provided in PEARL databases. This associative memory mechanism fits well with the data-driven control mechanism of UC and is useful for a great variety of tasks. For example, goal analysis of speech acts can be done through this associative mechanism:

(memory-association (recall-key (assertion (cd (goal (planner ?person) (objective ?obj))))) (cluster ((out-planfor (cd ?obj)))))

In the above example (provided by Jim Mayfield), UC analyzes the user's statement of wanting to do something as a request for UC to explain how to achieve that goal.

conclusions: The knowledge structures developed for UC have so far shown good efficiency in both access time and space usage within the limited domain of processing queries to a Unix Consultant. The knowledge structures fit well in the framework of data-driven programming used in UC. Ease of use is somewhat subjective, but beginners have been able to add to the UC knowledge base after an introductory graduate course in AI. Efforts underway to extend UC in such areas as dialogue will further test the merit of this representation scheme.

technical data: UC is a working system which is still under development. In size, UC is currently two and a half megabytes, of which half a megabyte is FRANZ lisp. Since the knowledge base is still growing, it is uncertain how much of an impact even more knowledge will have on the system, especially when the program becomes too large to fit in main memory. In terms of efficiency, queries to UC take between two and seven seconds of CPU time on a VAX 11/780. Currently, all the knowledge in UC is hand coded; however, efforts are under way to automate the process.
acknowledgments: Some of the knowledge structures used in UC are refinements of formats developed by Joe Faletti and Peter Norvig. Yigal Arens is responsible for the underlying memory structure used in UC, and of course, this project would not be possible without the guidance and advice of Robert Wilensky. Appendix:
null
null
null
null
{ "paperhash": [ "wilensky|talking_to_unix_in_english:_an_overview_of_uc", "jacobs|generation_in_a_natural_language_interface", "faletti|pandora:_a_program_for_doing_commonsense_planning_in_complex_situations", "schmolze|proceedings_of_the_1981_kl-one_workshop,", "wilensky|a_knowledge-based_approach_to_language_processing:_a_progress_report", "wilensky|phran_-_a_knowledge-based_natural_language_understander" ], "title": [ "Talking to UNIX in English: an overview of UC", "Generation in a Natural Language Interface", "PANDORA: A Program for Doing Commonsense Planning in Complex Situations", "Proceedings of the 1981 KL-ONE Workshop,", "A Knowledge-Based Approach to Language Processing: A Progress Report", "PHRAN - A Knowledge-Based Natural Language Understander" ], "abstract": [ "UC is a natural language help facility which advises users in using the UNIX operating system. Users can query UC about how to do things, command names and formats, online definitions of UNIX or general operating systems terminology, and debugging problems in using commands. UC is comprised of the following components: a language analyzer and generator, a context and memory model, an experimental common-sense planner, highly extensible knowledge bases on both the UNIX domain and the English language, a goal analysis component, and a system for acquisition of new knowledge through instruction in English. The language interface of UC is based on a “phrasal analysis” approach which integrates semantic, grammatical and other types of information. In addition, it includes capabilities for ellipsis resolution and reference disambiguation.", "The PHRED (PHR asal English Diction) generator produces the natural language output of Berkeley's UNIX Consultant system (UC). The generator shares its knowledge base with the language analyzer PHRAN (PHRasal ANalyser). The parser and generator, together a component of UC's user interface, draw from a database of pattern-concept pairs where the basic unit of the linguistic patterns is the phrase. Both are designed to provide multilingual capabilities, to facilitate linguistic paraphrases, and to be adaptable to the individual user's vocabulary and knowledge. The generator affords extensibility,simplicity, and processing speed while performing the task of producing natural language utterances from conceptual representations using a large knowledge base. This paper describes the implementation of the phrasal generator and discusses the role of generation in a user-friendly natural language interface.", "A planning program named PANDORA (Plan ANalyzer with Dynamic Organization, Revision, and Application) has been developed which creates plans in the common-sense domains of everyday situations and of a Unix** Consultant using hierarchical planning and metaplanning. PANDORA detects its own goals in an event-driven fashion, dynamically interleaving the creation, execution and revision of its plans.", "Abstract : The Second KL-ONE Workshop gathered researchers from twenty-one universities and research institutions for a series of discussions and presentations about the KL-ONE knowledge representation language. These proceedings summarize the discussions and presentations, provide position papers from the participants, list the agendas of the Workshop along with the names and addresses of the participants, and include a description of the KL-ONE language plus an index of some KL-ONE technical terms. 
(Author)", "We present a model of natural language use meant to encompass the language-specific aspects of understanding and production. The model is motivated by the pervasiveness of nongenerative language, by the desirability of a language analyzer ana a language production mechanism to share their knowledge, and the advantages of knowledge engineering features such as ease of extention and modification. \n \nThis model has been used as the basis for PHRAN, a language analyzer, and PHRED, a language production mechanism. We have implemented both these systems using a common knowledge base; we have produced versions of PHRAN that understand Spanish and Chinese with only changing the knowledge base and not modifying the program; and we have implemented PHRAN using the query language of a conventional relational data base system, and compared the performance of this system to a conventional LISP implementation.", "We have developed an approach to natural language processing in which the natural language processor is viewed as a knowledge-based system whose knowledge is about the meanings of the utterances of its language. The approach is oriented around the phrase rather than the word as the basic unit. We believe that this paradigm for language processing not only extends the capabilities of other natural language systems, but handles those tasks that previous systems could perform in a more systematic and extensible manner.We have constructed a natural language analysis program called PHRAN (PHRasal ANalyzer) based in this approach. This model has a number of advantages over existing systems, including the ability to understand a wider variety of language utterances, increased processing speed in some cases, a clear separation of control structure from data structure, a knowledge base that could be shared by a language production mechanism, greater ease of extensibility, and the ability to store some useful forms of knowledge that cannot readily be added to other systems." ], "authors": [ { "name": [ "R. Wilensky", "Yigal Arens", "David N. Chin" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "P. Jacobs" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Joseph Faletti" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "James G. Schmolze", "R. Brachman" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "R. Wilensky" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "R. Wilensky", "Y. Arens" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null, null, null, null ], "s2_corpus_id": [ "9333371", "956495", "39277837", "60933710", "9520559", "16980721" ], "intents": [ [ "background", "methodology" ], [ "methodology" ], [ "methodology" ], [ "background" ], [], [] ], "isInfluential": [ false, false, true, false, false, false ] }
Problem: The paper aims to investigate whether the knowledge structures implemented in UC, the UNIX Consultant, are sufficient to enable UC to respond to a wide range of user queries in the domain of the UNIX operating system. Solution: The paper proposes that by utilizing specific knowledge structures in parsing, reference, planning, goal detection, and generation tasks, UC can efficiently access and process information from a large expert system database, allowing for interactive conversations with users in the UNIX domain.
500
0.018
null
null
null
null
null
null
null
null
98c8a6d5fd98e37acaf5a564b905f54ad7a646dc
6504336
null
An Overview of the {N}igel Text Generation Grammar
Research on the text generation task has led to creation of a large systemic grammar of English, Nigel, which is embedded in a computer program. The grammar and the systemic framework have been extended by addition of a semantic stratum. The grammar generates sentences and other units under several kinds of experimental control. This paper describes augmentations of various precedents in the systemic framework. The emphasis is on developments which control the text to fulfill a purpose, and on characteristics which make Nigel relatively easy to embed in a larger experimental program.
{ "name": [ "Mann, William C." ], "affiliation": [ null ] }
null
null
21st Annual Meeting of the Association for Computational Linguistics
1983-06-01
17
50
null
Among the various uses for grammars, text generation at first seems to be relatively new. The organizing goal of text generation, as a research task, is to describe how texts can be created in fulfillment of text needs. Such a description must relate texts to needs, and so must contain a functional account of the use and nature of language, a very old goal. Computational text generation research should be seen as simply a particular way to pursue that goal. As part of a text generation research project, a grammar of English has been created and embodied in a computer program. This grammar and program, called Nigel, is intended as a component of a larger program called Penman. This paper introduces Nigel, with just enough detail about Penman to show Nigel's potential use in a text generation system. (A text need is the earliest recognition on the part of the speaker that the immediate situation is one in which he would like to produce speech. In this report we will alternate freely between the terms speaker, writer and author, between hearer and reader, and between speech and text. This is simply partial accommodation of prevailing jargon; no differences are intended.) Text generation seeks to characterize the use of natural languages by developing processes (computer programs) which can create appropriate, fluent text on demand. A representative research goal would be to create a program which could write a text that serves as a commentary on a game transcript, making the events of the game understandable. The guiding aims in the ongoing design of the Penman text generation program are as follows: 1. To learn, in a more specific way than has previously been achieved, how appropriate text can be created in response to text needs. 2. To identify the dominant characteristics which make a text appropriate for meeting its need. 3. To develop a demonstrable capacity to create texts which meet some identifiable practical class of text needs. Seeking to fulfill these goals, several different grammatical frameworks were considered. The systemic framework was chosen, and it has proven to be an entirely agreeable choice. Although it is relatively unfamiliar to many American researchers, it has a long history of use in work on concerns which are central to text generation. It was used by Winograd in the SHRDLU system, and more extensively by others since [Winograd 72, Davey 79, McKeown 82, McDonald 80]. A recent state-of-the-art survey identifies the systemic framework as one of a small number of linguistic frameworks which are likely to be the basis for significant text generation programs in this decade [Mann 82a]. One of the principal advantages of the systemic framework is its strong emphasis on "functional" explanations of grammatical phenomena. Each distinct kind of grammatical entity is associated with an expression of what it does for the speaker, so that the grammar indicates not only what is possible but why it would be used. Another is its emphasis on principled, justified descriptions of the choices which the grammar offers, i.e. all of its optionality. Both of these emphases support text generation programming significantly. For these and other reasons the systemic framework was chosen for Nigel. The creation of the Nigel program has required evolutionary rather than radical revisions in systemic notation, largely in the direction of making well-precedented ideas more explicit or detailed. Systemic notation deals principally with three kinds of entities: 1) systems, 2) realizations of systemic choices (including function structures), and 3) lexical items. These three account for most of the notational devices, and the Nigel program has separate parts for each. (This work would not have been possible without the active participation of Christian Matthiessen, and the participation and past contributions of Michael Halliday and other systemicists.) Comparing the systemic functional approach to a structural approach such as context-free grammar, ATNs or transformational grammar, the differences in style (and their effects on the programmed result) are profound. Although it is not possible to compare the approaches in depth here, we note several differences of interest to people more familiar with structural approaches: 1. Systems, which are most like structural rules, do not specify the order of constituents. Instead they are used to specify sets of features to be possessed by the grammatical construction as a whole. 2. The grammar typically pursues several independent lines of reasoning (or specification) whose results are then combined. This is particularly difficult to do in a structurally oriented grammar, which ordinarily expresses the state of development of a unit in terms of categories of constituents. 3. In the systemic framework, all variability of the structure of the result, and hence all grammatical control, is in one kind of construct, the system. In other frameworks there is often variability from several sources: optional rules, disjunctive options within rules, optional constituents, order of application and so forth. For generation these would have to be coordinated by methods which lie outside of the grammar, but in the systemic grammar the coordination problem does not exist. Each system contains a set of alternatives, symbols called grammatical features. When a system is entered, exactly one of its grammatical features must be chosen. Each system also has an input expression, which encodes the conditions under which the system is entered. During the generation, the program keeps track of the selection expression, the set of features which have been chosen up to that point. Based on the selection expression, the program invokes the realization operations which are associated with each feature chosen. In addition to the systems there are Gates. A gate can be thought of as an input expression which activates a particular grammatical feature, without choice. These grammatical features are used just as those chosen in systems. Gates are most often used to perform realization in response to a collection of features. (Each realization operation is associated with just one feature; there are no realization operations which depend on more than one feature, and no rules corresponding to Hudson's function realization rules. The gates facilitate eliminating this category of rules, with a net effect that the notation is more homogeneous.) There are three groups of realization operators: those that build structure (in terms of grammatical functions), those that constrain order, and those that associate features with grammatical functions. Partition constrains one function (hence one fundle) to be realized to the left of another, but does not constrain them to be adjacent. Order constrains just as Partition does, and in addition constrains the two to be realized adjacently.
OrderAtFront constrains a function to be realized as the leftmost among the daughters of its mother, and OrderAtEnd symmetrically as rightmost. Of these, only Partition is new to the systemic framework. The lexicon is defined as a set of arbitrary symbols, called word names, such as "budten", associated with symbols called spellings, the lexical items as they appear in text. In order to keep Nigel simple during its early development, there is no formal provision for morphology or for relations between items which arise from the same root. Each word name has an associated set of lexical features. Lexify selects items by word name; Classify and OutClassify operate on sets of items in terms of the lexical features. Nigel's grammar is partly based on published sources, and is partly new. It has all been expressed in a single homogeneous notation, with consistent naming conventions and much care to avoid reusing names where identity is not intended. The grammar is organized as a single network, whose one entry point is used for generating every kind of unit. (At the end of 1982, Nigel contained about 220 systems, with all of the necessary realizations specified; it is thus the largest systemic grammar in a single notation, and possibly the largest grammar of a natural language in any of the functional linguistic traditions.) Nigel's lexicon is designed for test purposes rather than for coverage of any particular generation task. It currently recognizes 130 lexical features, and it has about 2000 lexical items in about 580 distinct categories (combinations of features). The most novel part of Nigel is the semantics of the grammar. One of the goals identified above was to specify how the grammar can be regulated effectively by the prevailing text need. Just as the grammar and the resulting text are both very complex, so is the text need. In fact, grammar and text complexity actually reflect the prior complexity of the text need which gave rise to the text.
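Before turning to how the semantics regulates these choices, here is a minimal Python sketch of the system-and-gate traversal described above. It is an illustration only, not Nigel's INTERLISP code; the tiny network, the feature names, and the 'intended' table standing in for choosers are all invented for the example.

# Hypothetical sketch: systems with input expressions are entered, exactly one
# feature is chosen per system, gates fire without choice, and a selection
# expression accumulates.
def entered(input_expr, selected):
    return all(f in selected for f in input_expr)

systems = [
    {"name": "Rank",   "input": [],                "features": ["clause", "nominal-group"]},
    {"name": "Mood",   "input": ["clause"],        "features": ["declarative", "interrogative"]},
    {"name": "Number", "input": ["nominal-group"], "features": ["singular", "plural"]},
]
gates = [
    {"feature": "finite", "input": ["declarative"]},   # activated without choice
]

def generate(intended):
    selected = []                      # the selection expression
    changed = True
    while changed:
        changed = False
        for s in systems:
            if entered(s["input"], selected) and not any(f in selected for f in s["features"]):
                selected.append(intended[s["name"]])   # exactly one feature per system
                changed = True
        for g in gates:
            if entered(g["input"], selected) and g["feature"] not in selected:
                selected.append(g["feature"])          # gate: feature without choice
                changed = True
    return selected

print(generate({"Rank": "clause", "Mood": "declarative", "Number": None}))
# -> ['clause', 'declarative', 'finite']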
The grammar must respond selectively to those elements of the need which are represented by the unit being generated at the moment. Except for lexical choice, all variability in Nigel's generated result comes from variability of choice in the grammar. Generating an appropriate structure consists entirely in making the choices in each system appropriately. The semantics of the grammar must therefore be a semantics of choices in the individual systems; the choices must be made in each system according to the appropriate elements of the prevailing need. In Nigel this semantic control is localized to the systems themselves. For each system, a procedure is defined which can declare the appropriate choice in the system. When the system is entered, the procedure is followed to discover the appropriate choice. Such a procedure is called a chooser (or "choice expert"). The chooser is the semantic account of the system, the description of the circumstances under which each choice is appropriate. To specify the semantics of the choices, we needed a notation for the choosers as procedures. This paper describes that notation briefly and informally. Its use is exemplified in the Nigel demonstration [Mann 83] and developed in more detail in another report [Mann 82b]. (Nigel is programmed in INTERLISP.) To gain access to the details of the need, the choosers must in some sense ask questions about particular entities. For example, to decide between the grammatical features Singular and Plural in creating a NominalGroup, the Number chooser (the chooser for the Number system, where these features are the options) must be able to ask whether a particular entity (already identified elsewhere as the entity the NominalGroup represents) is unitary or multiple. That knowledge resides outside of Nigel, in the environment. The environment is regarded informally as being composed of three disjoint regions: 1. the Knowledge Base, consisting of information which existed prior to the text need; 2. the Text Plan, consisting of information which was created in response to the text need, but before the grammar was entered; 3. the Text Services, consisting of information which is available on demand, without anticipation. Choosers must have access to a stock of symbols representing entities in the environment. Such symbols are called hubs. In the course of generation, hubs are associated with grammatical functions; the associations are kept in a Function Association Table, which is used to reaccess information in the environment. For example, in choosing pronouns the choosers will ask questions about the multiplicity of an entity which is associated with the THING function in the Function Association Table. Later they may ask about the gender of the same entity, again accessing it through its association with THING. This use of grammatical functions is an extension of previous uses. Consequently, relations between referring phrases and the concepts being referred to are captured in the Function Association Table. For example, the function representing the NominalGroup as a whole is associated with the hub which represents the thing being referred to in the environment. Similarly for possessive determiners, the grammatical function for the determiner is associated with the hub for the possessor. It is convenient to define choosers in such a way that they have the form of a tree. For any particular case, a single path of operations is traversed. Choosers are defined principally in terms of the following operations: 1. Ask presents an inquiry to the environment. The inquiry has a fixed predetermined set of possible responses, each corresponding to a branch of the path in the chooser. 2. Identify presents an inquiry to the environment. The set of responses is open-ended. The response is put in the Function Association Table, associated with a grammatical function which is given (in addition to the inquiry) as a parameter to the Identify operator. (See the demonstration paper in [Mann 83] for an explanation and example of its use.) 3. Choose declares a choice. 4. CopyHub transfers an association of a hub from one grammatical function to another. (There are three others which have some linguistic significance: Pledge, TermPledge, and ChoiceError. These are necessary but do not play a central role; they are named here just to indicate that the chooser notation is very simple.) Choosers obtain information about the immediate circumstances in which they are generating by presenting inquiries to the environment. Presenting inquiries and receiving replies constitute the only way in which the grammar and its environment interact. An inquiry consists of an inquiry operator and a sequence of inquiry parameters. Each inquiry parameter is a grammatical function, and it represents (via the Function Association Table) the entities in the environment which the grammar is inquiring about. The operators are defined in such a way that they have both formal and informal modes of expression.
Informally, each inquiry is a predefined question, in English, which represents the issue that the inquiry is intended to resolve for any chooser that uses it. Formally, the inquiry shows how systemic choices depend on facts about particular grammatical functions, and in particular restricts the account of a particular choice to be responsive to a well-constrained, well-identified collection of facts. Both the informal English form of the inquiry and the corresponding formal expression are regarded as parts of the semantic theory expressed by the choosers which use the inquiry. The entire collection of inquiries for a grammar is a definition of the semantic scope to which the grammar is responsive at its level of delicacy. Notice that in the ProcessType chooser, although there are only four possible choices, there are five paths through the chooser from the starting point at the top, because Mental processes can be identified in two different ways: those which represent states of affairs and those which do not. The number of termination points of a chooser often exceeds the number of choices available. Table 1 shows the English forms of the questions being asked in the ProcessType chooser. (A word in all capitals names a grammatical function which is a parameter of the inquiry.) For example, MentalProcessQ asks: Is PROCESS a process of comprehension, recognition, belief, perception, deduction, remembering, evaluation or mental reaction? The sequence of inquiries which the choosers present to the environment, together with its responses, creates a dialogue. The unit generated can thus be seen as being formed out of a negotiation between the choosers and the environment. This is a particularly instructive way to view the grammar and its semantics. The grammar performs the final steps in the generation process. It must complete the surface form of the text, but there is a great deal of preparation necessary before it is appropriate for the grammar to start its work. Penman's design calls for many kinds of activities under the umbrella of "text planning" to provide the necessary support. Work on Nigel is proceeding in parallel with other work intended to create text planning processes.
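The chooser notation just described can be pictured as a small decision tree whose nodes are inquiries to the environment. The following hedged Python sketch illustrates the idea; apart from MentalProcessQ, the inquiry names, the toy environment, and the simplified chooser structures are invented for the example and do not reproduce Nigel's actual choosers.

# Hypothetical sketch of choosers: a path of Ask steps ending in a choice.
# The environment answers inquiries about hubs held in the Function
# Association Table (here just a dict from grammatical functions to hubs).
def number_chooser(ask, fat):
    # Chooser for the Number system: Singular vs Plural.
    if ask("MultiplicityQ", fat["THING"]) == "unitary":
        return "Singular"
    return "Plural"

def process_type_chooser(ask, fat):
    # Simplified rendering of a ProcessType-style chooser; note that one
    # choice (Mental) is reachable along two different paths.
    p = fat["PROCESS"]
    if ask("StaticConditionQ", p) == "yes":
        if ask("MentalProcessQ", p) == "yes":
            return "Mental"
        return "Relational"
    if ask("MentalProcessQ", p) == "yes":
        return "Mental"
    if ask("VerbalProcessQ", p) == "yes":
        return "Verbal"
    return "Material"

# A toy environment: answers are looked up in a table keyed by (inquiry, hub).
answers = {("MultiplicityQ", "user-17"): "unitary",
           ("StaticConditionQ", "delete-act"): "no",
           ("MentalProcessQ", "delete-act"): "no",
           ("VerbalProcessQ", "delete-act"): "no"}
ask = lambda inquiry, hub: answers.get((inquiry, hub), "no")

print(number_chooser(ask, {"THING": "user-17"}))              # -> Singular
print(process_type_chooser(ask, {"PROCESS": "delete-act"}))   # -> Material

The point the sketch preserves is architectural: the chooser never inspects the hub itself, only the environment's answers to named inquiries, which is what makes the grammar independent of the environment's representation.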
Some operators associate features with functions: they are Preselect, which associates a grammatical feature with a function (and hence with its fundle); Classify, which associates a lexical feature with a function; OutClassify, which associates a lexical feature with a function in a preventive way; and Lexify, which forces a particular lexical item to be used to realize a function. Of these, OutClassify and Lexify are new, taking up roles previously filled by Classify. OutClassify restricts the realization of a function (and hence fundle) to be a lexical item which does not bear the named feature. This is useful for controlling items in exception categories (e.g. reflexives) in a localized, manageable way. Lexify allows the grammar to force selection of a particular item without having a special lexical feature for that purpose. In addition to these realization operators, there is a set of Default Function Order Lists. These are lists of functions which will be ordered in particular ways by Nigel, provided that the functions on the lists occur in the structure, and that the realization operators have not already ordered those functions. A large proportion of the constraint of order is performed through the use of these lists. The realization operations of the systemic framework, especially those having to do with order, have not been specified so explicitly before. Nigel does not presume that any particular form of knowledge representation prevails in the environment. The conceptual content of the environment is represented in the Function Association Table only by single, arbitrary, undecomposable symbols, received from the environment; the interface is designed so that environmentally structured responses do not occur. There is thus no way for Nigel to tell whether the environment's representation is, for example, a form of predicate calculus or a frame-based notation. Instead, the environment must be able to respond to inquiries, which requires that the inquiry operators be implemented. It must be able to answer inquiries about multiplicity, gender, time, and so forth, by whatever means are appropriate to the actual environment. As a result, Nigel is largely independent of the environment's notation. It does not need to know how to search, and so it is insulated from changes in representation. We expect that Nigel will be transferable from one application to another with relatively little change, and will not embody covert knowledge about particular representation techniques.
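As a companion to the ordering operators and the Default Function Order Lists described above, here is a small, hypothetical Python sketch of how ordering might be resolved for the daughters of one unit: explicit before-constraints first, then a default order list for anything left unconstrained. It models only the basic precedence relation, not adjacency or OrderAtFront/OrderAtEnd, and all names in it are illustrative rather than Nigel's.

# Hypothetical ordering resolution for the functions of one unit.
def order_functions(functions, before_pairs, default_order):
    remaining = set(functions)
    ordered = []
    while remaining:
        # pick functions with no unplaced function required before them
        free = [f for f in remaining
                if not any(a in remaining and b == f for a, b in before_pairs)]
        # break ties with the default function order list, then alphabetically
        free.sort(key=lambda f: (default_order.index(f) if f in default_order else 99, f))
        ordered.append(free[0])
        remaining.remove(free[0])
    return ordered

functions = ["FINITE", "SUBJECT", "PROCESS", "GOAL"]
before_pairs = [("SUBJECT", "FINITE")]          # e.g. an Order-style constraint
default_order = ["SUBJECT", "FINITE", "PROCESS", "GOAL"]
print(order_functions(functions, before_pairs, default_order))
# -> ['SUBJECT', 'FINITE', 'PROCESS', 'GOAL']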
null
This section provides a set of samples of Nigel's syntactic diversity: all of the sentence and clause structures in the Abstract of this paper are within Nigel's syntactic scope. Following a frequent practice in systemic linguistics (introduced by Halliday), the grammar provides for three relatively independent kinds of specification of each syntactic unit: the Ideational or logical content, the Interpersonal content (attitudes and relations between the speaker and the unit generated) and the Textual content. Provisions for textual control are well elaborated, and so contribute significantly to Nigel's ability to control the flow of the reader's attention and fit sentences into larger units of text.
The activity of defining Nigel, especially its semantic parts, is productive in its own right, since it creates interesting descriptions and proposals about the nature of English and the meaning of syntactic alternatives, as well as new notational devices. But given Nigel as a program, containing a full complement of choosers, inquiry operators and related entities, new possibilities for investigation also arise. Nigel provides the first substantial opportunity to test systemic grammars to find out whether they produce unintended combinations of functions, structures or uses of lexical items. Similarly, it can test for contradictions. Again, Nigel provides the first substantial opportunity for such a test. And such a test is necessary, since there appears to be a natural tendency to write grammars with excessive homogeneity, not allowing for possible exception cases. A systemic functional account can also be tested in Nigel by attempting to replicate particular natural texts--a very revealing kind of experimentation. Since Nigel provides a consistent notation and has been tested extensively, it also has some advantages for educational and linguistic research uses. On another scale, the whole project can be regarded as a single experiment, a test of the functionalism of the systemic framework, and of its identification of the functions of English. In artificial intelligence, there is a need for priorities and guidance in the design of new knowledge representation notations. The inquiry operators of Nigel are a particularly interesting proposal as a set of distinctions already embodied in a mature, evolved knowledge notation, English, and encodable in other knowledge notations as well. To take just a few examples among many, the inquiry operators suggest that a notation for knowledge should be able to represent objects and actions, and should be able to distinguish between definite existence, hypothetical existence, conjectural existence and non-existence of actions. These are presently rather high expectations for artificial intelligence knowledge representations. As part of an effort to define a text generation process, a programmed systemic grammar called Nigel has been created. Systemic notation, a grammar of English, a semantic notation which extends systemic notation, and a semantics for English are all included as distinct parts of Nigel. When Nigel has been completed it will be useful as a research tool in artificial intelligence and linguistics, and as a component in systems which generate text. (It is our intention eventually to make Nigel available for teaching, research, development and computational application.)
Main paper: .1 systems and gates: Each system contains a set of alternatives, symbols called grammatical features. When a system is entered, exactly one of its grammatical features must be chosen. Each system also has an input expression, which encodes the conditions under which the system is entered. During the generation, the program keeps track of the selection expression, the set of features which have been chosen up to that point. Based on the selection expression, the program invokes the realization operations which are associated with each feature chosen. In addition to the systems there are Gates. A gate can be thought of as an input expression which activates a particular grammatical feature, without choice. These grammatical features are used just as those chosen in systems. Gates are most often used to perform realization in response to a collection of features. (Each realization operation is associated with just one feature; there are no realization operations which depend on more than one feature, and no rules corresponding to Hudson's function realization rules. The gates facilitate eliminating this category of rules, with a net effect that the notation is more homogeneous.) There are three groups of realization operators: those that build structure (in terms of grammatical functions), those that constrain order, and those that associate features with grammatical functions. Partition constrains one function (hence one fundle) to be realized to the left of another, but does not constrain them to be adjacent. Order constrains just as Partition does, and in addition constrains the two to be realized adjacently. OrderAtFront constrains a function to be realized as the leftmost among the daughters of its mother, and OrderAtEnd symmetrically as rightmost. Of these, only Partition is new to the systemic framework. The lexicon is defined as a set of arbitrary symbols, called word names, such as "budten", associated with symbols called spellings, the lexical items as they appear in text. In order to keep Nigel simple during its early development, there is no formal provision for morphology or for relations between items which arise from the same root. Each word name has an associated set of lexical features. Lexify selects items by word name; Classify and OutClassify operate on sets of items in terms of the lexical features. Nigel's grammar is partly based on published sources, and is partly new. It has all been expressed in a single homogeneous notation, with consistent naming conventions and much care to avoid reusing names where identity is not intended. The grammar is organized as a single network, whose one entry point is used for generating every kind of unit. Nigel's lexicon is designed for test purposes rather than for coverage of any particular generation task. It currently recognizes 130 lexical features, and it has about 2000 lexical items in about 580 distinct categories (combinations of features). The most novel part of Nigel is the semantics of the grammar. One of the goals identified above was to specify how the grammar can be regulated effectively by the prevailing text need. Just as the grammar and the resulting text are both very complex, so is the text need. In fact, grammar and text complexity actually reflect the prior complexity of the text need which gave rise to the text.
The grammar must respond selectively to those elements of the need which are represented by the unit being generated at the moment. Except for lexical choice, all variability in Nigel's generated result comes from variability of choice in the grammar. Generating an appropriate structure consists entirely in making the choices in each system appropriately. The semantics of the grammar must therefore be a semantics of choices in the individual systems; the choices must be made in each system according to the appropriate elements of the prevailing need. In Nigel this semantic control is localized to the systems themselves. For each system, a procedure is defined which can declare the appropriate choice in the system. When the system is entered, the procedure is followed to discover the appropriate choice. Such a procedure is called a chooser (or "choice expert"). The chooser is the semantic account of the system, the description of the circumstances under which each choice is appropriate. To specify the semantics of the choices, we needed a notation for the choosers as procedures. This paper describes that notation briefly and informally. Its use is exemplified in the Nigel demonstration [Mann 83] and developed in more detail in another report [Mann 82b]. To gain access to the details of the need, the choosers must in some sense ask questions about particular entities. For example, to decide between the grammatical features Singular and Plural in creating a NominalGroup, the Number chooser (the chooser for the Number system, where these features are the options) must be able to ask whether a particular entity (already identified elsewhere as the entity the NominalGroup represents) is unitary or multiple. (At the end of 1982, Nigel contained about 220 systems, with all of the necessary realizations specified; it is thus the largest systemic grammar in a single notation, and possibly the largest grammar of a natural language in any of the functional linguistic traditions. Nigel is programmed in INTERLISP.) That knowledge resides outside of Nigel, in the environment. The environment is regarded informally as being composed of three disjoint regions: 1. the Knowledge Base, consisting of information which existed prior to the text need; 2. the Text Plan, consisting of information which was created in response to the text need, but before the grammar was entered; 3. the Text Services, consisting of information which is available on demand, without anticipation. Choosers must have access to a stock of symbols representing entities in the environment. Such symbols are called hubs. In the course of generation, hubs are associated with grammatical functions; the associations are kept in a Function Association Table, which is used to reaccess information in the environment. For example, in choosing pronouns the choosers will ask questions about the multiplicity of an entity which is associated with the THING function in the Function Association Table. Later they may ask about the gender of the same entity, again accessing it through its association with THING. This use of grammatical functions is an extension of previous uses. Consequently, relations between referring phrases and the concepts being referred to are captured in the Function Association Table. For example, the function representing the NominalGroup as a whole is associated with the hub which represents the thing being referred to in the environment.
Similarly for possessive determiners, the grammatical function for the determiner is associated with the hub for the possessor. It is convenient to define choosers in such a way that they have the form of a tree. For any particular case, a single path of operations is traversed. Choosers are defined principally in terms of the following operations: 1. Ask presents an inquiry to the environment. The inquiry has a fixed predetermined set of possible responses, each corresponding to a branch of the path in the chooser. 2. Identify presents an inquiry to the environment. The set of responses is open-ended. The response is put in the Function Association Table, associated with a grammatical function which is given (in addition to the inquiry) as a parameter to the Identify operator. (See the demonstration paper in [Mann 83] for an explanation and example of its use.) 3. Choose declares a choice. 4. CopyHub transfers an association of a hub from one grammatical function to another. (There are three others which have some linguistic significance: Pledge, TermPledge, and ChoiceError. These are necessary but do not play a central role; they are named here just to indicate that the chooser notation is very simple.) Choosers obtain information about the immediate circumstances in which they are generating by presenting inquiries to the environment. Presenting inquiries and receiving replies constitute the only way in which the grammar and its environment interact. An inquiry consists of an inquiry operator and a sequence of inquiry parameters. Each inquiry parameter is a grammatical function, and it represents (via the Function Association Table) the entities in the environment which the grammar is inquiring about. The operators are defined in such a way that they have both formal and informal modes of expression. Informally, each inquiry is a predefined question, in English, which represents the issue that the inquiry is intended to resolve for any chooser that uses it. Formally, the inquiry shows how systemic choices depend on facts about particular grammatical functions, and in particular restricts the account of a particular choice to be responsive to a well-constrained, well-identified collection of facts. Both the informal English form of the inquiry and the corresponding formal expression are regarded as parts of the semantic theory expressed by the choosers which use the inquiry. The entire collection of inquiries for a grammar is a definition of the semantic scope to which the grammar is responsive at its level of delicacy. Notice that in the ProcessType chooser, although there are only four possible choices, there are five paths through the chooser from the starting point at the top, because Mental processes can be identified in two different ways: those which represent states of affairs and those which do not. The number of termination points of a chooser often exceeds the number of choices available. Table 1 shows the English forms of the questions being asked in the ProcessType chooser. (A word in all capitals names a grammatical function which is a parameter of the inquiry.) For example, MentalProcessQ asks: Is PROCESS a process of comprehension, recognition, belief, perception, deduction, remembering, evaluation or mental reaction? The sequence of inquiries which the choosers present to the environment, together with its responses, creates a dialogue. The unit generated can thus be seen as being formed out of a negotiation between the choosers and the environment.
This is a particularly instructive way to view the grammar and its semantics. The grammar performs the final steps in the generation process. It must complete the surface form of the text, but there is a great deal of preparation necessary before it is appropriate for the grammar to start its work. Penman's design calls for many kinds of activities under the umbrella of "text planning" to provide the necessary support. Work on Nigel is proceeding in parallel with other work intended to create text planning processes. some operators associate features with functions: They are Preselect, which associates a grammatical feature with a function (and hence with its fundle); Classify, which associates a lexical feature with a function; OutClassify, which associates a lexical feature with a function in a preventive way; and Lexify, which forces a particular lexical item to be used to realize a function. Of these, OutClassify and Lexify are new, taking up roles previously filled by Classify. OutClassify restricts the realization of a function (and hence fundle) to be a lexical item which does not bear the named feature. This is useful for controlling items in exception categories (e.g. reflexives) in a localized, manageable way. Lexify allows the grammar to force selection of a particular item without having a special lexical feature for that purpose. In addition to these realization operators, there is a set of Default Function Order Lists. These are lists of functions which will be ordered in particular ways by Nigel, provided that the functions on the lists occur in the structure, and that the realization operators have not already ordered those functions. A large proportion of the constraint of order is performed through the use of these lists. The realization operations of the systemic framework, especially those having to do with order, have not been specified so explicitly before. Nigel does not presume that any particular form of knowledge representation prevails in the environment. The conceptual content of the environment is represented in the Function Association Table only by single, arbitrary, undecomposable symbols, received from the environment; the interface is designed so that environmentally structured responses do not occur. There is thus no way for Nigel to tell whether the environment's representation is, for example, a form of predicate calculus or a frame-based notation. Instead, the environment must be able to respond to inquiries, which requires that the inquiry operators be implemented. It must be able to answer inquiries about multiplicity, gender, time, and so forth, by whatever means are appropriate to the actual environment. As a result, Nigel is largely independent of the environment's notation. It does not need to know how to search, and so it is insulated from changes in representation. We expect that Nigel will be transferable from one application to another with relatively little change, and will not embody covert knowledge about particular representation techniques.
nigel's syntactic diversity: This section provides a set of samples of Nigel's syntactic diversity: all of the sentence and clause structures in the Abstract of this paper are within Nigel's syntactic scope. Following a frequent practice in systemic linguistics (introduced by Halliday), the grammar provides for three relatively independent kinds of specification of each syntactic unit: the Ideational or logical content, the Interpersonal content (attitudes and relations between the speaker and the unit generated) and the Textual content. Provisions for textual control are well elaborated, and so contribute significantly to Nigel's ability to control the flow of the reader's attention and fit sentences into larger units of text. uses for nigel: The activity of defining Nigel, especially its semantic parts, is productive in its own right, since it creates interesting descriptions and proposals about the nature of English and the meaning of syntactic alternatives, as well as new notational devices. But given Nigel as a program, containing a full complement of choosers, inquiry operators and related entities, new possibilities for investigation also arise. Nigel provides the first substantial opportunity to test systemic grammars to find out whether they produce unintended combinations of functions, structures or uses of lexical items. Similarly, it can test for contradictions. Again, Nigel provides the first substantial opportunity for such a test. And such a test is necessary, since there appears to be a natural tendency to write grammars with excessive homogeneity, not allowing for possible exception cases. A systemic functional account can also be tested in Nigel by attempting to replicate particular natural texts--a very revealing kind of experimentation. Since Nigel provides a consistent notation and has been tested extensively, it also has some advantages for educational and linguistic research uses. On another scale, the whole project can be regarded as a single experiment, a test of the functionalism of the systemic framework, and of its identification of the functions of English. In artificial intelligence, there is a need for priorities and guidance in the design of new knowledge representation notations. The inquiry operators of Nigel are a particularly interesting proposal as a set of distinctions already embodied in a mature, evolved knowledge notation, English, and encodable in other knowledge notations as well. To take just a few examples among many, the inquiry operators suggest that a notation for knowledge should be able to represent objects and actions, and should be able to distinguish between definite existence, hypothetical existence, conjectural existence and non-existence of actions. These are presently rather high expectations for artificial intelligence knowledge representations. summary: As part of an effort to define a text generation process, a programmed systemic grammar called Nigel has been created. Systemic notation, a grammar of English, a semantic notation which extends systemic notation, and a semantics for English are all included as distinct parts of Nigel. When Nigel has been completed it will be useful as a research tool in artificial intelligence and linguistics, and as a component in systems which generate text. (It is our intention eventually to make Nigel available for teaching, research, development and computational application.) challenge: Among the various uses for grammars, text generation at first seems to be relatively new.
The organizing goal of text generation, as a research task, is to describe how texts can be created in fulfillment of text needs. Such a description must relate texts to needs, and so must contain a functional account of the use and nature of language, a very old goal. Computational text generation research should be seen as simply a particular way to pursue that goal. As part of a text generation research project, a grammar of English has been created and embodied in a computer program. This grammar and program, called Nigel, is intended as a component of a larger program called Penman. This paper introduces Nigel, with just enough detail about Penman to show Nigel's potential use in a text generation system. (A text need is the earliest recognition on the part of the speaker that the immediate situation is one in which he would like to produce speech. In this report we will alternate freely between the terms speaker, writer and author, between hearer and reader, and between speech and text. This is simply partial accommodation of prevailing jargon; no differences are intended.) Text generation seeks to characterize the use of natural languages by developing processes (computer programs) which can create appropriate, fluent text on demand. A representative research goal would be to create a program which could write a text that serves as a commentary on a game transcript, making the events of the game understandable. The guiding aims in the ongoing design of the Penman text generation program are as follows: 1. To learn, in a more specific way than has previously been achieved, how appropriate text can be created in response to text needs. 2. To identify the dominant characteristics which make a text appropriate for meeting its need. 3. To develop a demonstrable capacity to create texts which meet some identifiable practical class of text needs. Seeking to fulfill these goals, several different grammatical frameworks were considered. The systemic framework was chosen, and it has proven to be an entirely agreeable choice. Although it is relatively unfamiliar to many American researchers, it has a long history of use in work on concerns which are central to text generation. It was used by Winograd in the SHRDLU system, and more extensively by others since [Winograd 72, Davey 79, McKeown 82, McDonald 80]. A recent state-of-the-art survey identifies the systemic framework as one of a small number of linguistic frameworks which are likely to be the basis for significant text generation programs in this decade [Mann 82a]. One of the principal advantages of the systemic framework is its strong emphasis on "functional" explanations of grammatical phenomena. Each distinct kind of grammatical entity is associated with an expression of what it does for the speaker, so that the grammar indicates not only what is possible but why it would be used. Another is its emphasis on principled, justified descriptions of the choices which the grammar offers, i.e. all of its optionality. Both of these emphases support text generation programming significantly. For these and other reasons the systemic framework was chosen for Nigel. The creation of the Nigel program has required evolutionary rather than radical revisions in systemic notation, largely in the direction of making well-precedented ideas more explicit or detailed. Systemic notation deals principally with three kinds of entities: 1) systems, 2) realizations of systemic choices (including function structures), and 3) lexical items.
These three account for most of the notational devices, and the Nigel program has separate parts for each. (This work would not have been possible without the active participation of Christian Matthiessen, and the participation and past contributions of Michael Halliday and other systemicists.) Comparing the systemic functional approach to a structural approach such as context-free grammar, ATNs or transformational grammar, the differences in style (and their effects on the programmed result) are profound. Although it is not possible to compare the approaches in depth here, we note several differences of interest to people more familiar with structural approaches: 1. Systems, which are most like structural rules, do not specify the order of constituents. Instead they are used to specify sets of features to be possessed by the grammatical construction as a whole. 2. The grammar typically pursues several independent lines of reasoning (or specification) whose results are then combined. This is particularly difficult to do in a structurally oriented grammar, which ordinarily expresses the state of development of a unit in terms of categories of constituents. 3. In the systemic framework, all variability of the structure of the result, and hence all grammatical control, is in one kind of construct, the system. In other frameworks there is often variability from several sources: optional rules, disjunctive options within rules, optional constituents, order of application and so forth. For generation these would have to be coordinated by methods which lie outside of the grammar, but in the systemic grammar the coordination problem does not exist. Appendix:
null
null
null
null
{ "paperhash": [ "mann|the_anatomy_of_a_systemic_choice", "sutcliffe|oxford_university_press", "mckeown|generating_natural_language_text_in_response_to_questions_about_database_structure", "mcdonald|natural_language_production_as_a_process_of_decision-making_under_constraints", "halliday|cohesion_in_english", "hudson|arguments_for_a_non-transformational_grammar" ], "title": [ "The Anatomy of a Systemic Choice", "Oxford University Press", "Generating natural language text in response to questions about database structure", "Natural language production as a process of decision-making under constraints", "Cohesion in English", "Arguments for a Non-Transformational Grammar" ], "abstract": [ "This paper presents a framework for expressing how choices are made in systemic grammars. Formalizing the description of choice processes enriches descriptions of the syntax and semantics of languages, and it contributes to constructive models of language use. There are applications in education and computation. The framework represents the grammar as a combination of systemic syntactic description and explicit choice processes, called “choice experts.” Choice experts communicate across the boundary of the grammar to its environment, exploring an external intention to communicate. The environment's answers lead to choices and thereby to creation of sentences and other units, tending to satisfy the intention to communicate. The experts’ communicative framework includes an extension to the systemic notion of a function, in the direction of a more explicit semantics. Choice expert processes are presented in two notations, one informal and the other formal. The informal notation yields a grammar‐guided conver...", "An examination of the cult of Sainte Genevieve, the patron saint of Paris. Using hagiographical and liturgical documents, as well as municipal, ecclesiastical and notarial records, it analyzes the religious, social and political contexts of public devotion in the early modern city. main line of argument here has to do with how scholastics perceived humanists to be undermining religious authority, whether by rhetorical frills not in keeping with the plain speech of a humble Christian or by proposing emendations to the Latin Vulgate. These are themes that permit Rummel to draw on her unparalleled grasp of the controversies between Erasmus and his Catholic critics and her sensitivity to the genres of classical rhetoric, including epideictic literature. This study should now be the first work consulted on the subject.\" breadth of is striking. Earlier studies tended to focus on either the 'Renaissance' or 'Reformation' phases of the debate; hers works across this artificial divide with masterful ease and to good purpose.. .[An] elegant and lucid", "There are two major aspects of computer-based text generation: (1) determining the content and textual shape of what is to be said; and (2) transforming that message into natural language. Emphasis in this research has been on a computational solution to the questions of what to say and how to organize it effectively. A generation method was developed and implemented in a system called TEXT that uses principles of discourse structure, discourse coherency, and relevancy criterion. 
\nThe main features of the generation method developed for the TEXT strategic component include (1) selection of relevant information for the answer, (2) the pairing of rhetorical techniques for communication (such as analogy) with discourse purposes (for example, providing definitions) and (3) a focusing mechanism. Rhetorical techniques, which encode aspects of discourse structure, are used to guide the selection of propositions from a relevant knowledge pool. The focusing mechanism aids in the organization of the message by constraining the selection of information to be talked about next to that which ties in with the previous discourse in an appropriate way. \nThis work on generation has been done within the framework of a natural language interface to a database system. The implemented system generates responses of paragraph length to questions about database structure. Three classes of questions have been considered: questions about information available in the database, requests for definitions, and questions about the differences between database entities. \nThe main theoretical results of this research have been on the effect of discourse structure and focus constraints on the generation process. A computational treatment of rhetorical devices has been developed which is used to guide the generation process. Previous work on focus of attention has been extended for the task of generation to provide constraints on what to say next. The use of these two interacting mechanisms constitutes a departure from earlier generation systems. The approach taken in this research is that the generation process should not simply trace the knowledge representation to produce text. Instead, communicative strategies people are familiar with are used to effectively convey information. This means that the same information may be described in different ways on different occasions.", "1,102,701. Locating conductors. TATEISI ELECTRONICS CO. June 16, 1965 [June 24, 1964], No. 25467/65. Heading G1N. To compensate for the effect of supply voltage fluctuations on an electromagnetic detector, the output amplifier has a D. C. reference voltage varying with the mains supply. As shown the sensing head comprises a primary 2 and opposed secondaries 3, 4 (for detecting a conductor 9). The output is applied through an amplifier 14 to a common emitter trigger comprising transistors 17, 18 to operate a switching circuit 23. The switching circuit and transistors are energized from constant voltage supplies, but the emitter \"reference\" bias is derived from the current through a resistor 28 in series with a Zener diode 29 and hence varies with the A. C. supply to the sensing head.", "Cohesion in English is concerned with a relatively neglected part of the linguistic system: its resources for text construction, the range of meanings that are speciffically associated with relating what is being spoken or written to its semantic environment. A principal component of these resources is 'cohesion'. This book studies the cohesion that arises from semantic relations between sentences. Reference from one to the other, repetition of word meanings, the conjunctive force of but, so, then and the like are considered. Further, it describes a method for analysing and coding sentences, which is applied to specimen texts.", "For the past decade, the dominant transformational theory of syntax has produced the most interesting insights into syntactic properties. 
Over the same period another theory, systemic grammar, has been developed very quietly as an alternative to the transformational model. In this work Richard A. Hudson outlines \"daughter-dependency theory,\" which is derived from systemic grammar, and offers empirical reasons for preferring it to any version of transformational grammar. The goal of daughter-dependency theory is the same as that of Chomskyan transformational grammar to generate syntactic structures for all (and only) syntactically well-formed sentences that would relate to both the phonological and the semantic structures of the sentences. However, unlike transformational grammars, those based on daughter-dependency theory generate a single syntactic structure for each sentence. This structure incorporates all the kinds of information that are spread, in a transformational grammar, over to a series of structures (deep, surface, and intermediate). Instead of the combination of phrase-structure rules and transformations found in transformational grammars, daughter-dependency grammars contain rules with the following functions: classification, dependency-marking, or ordering. Hudson's strong arguments for a non-transformational grammar stress the capacity of daughter-dependency theory to reflect the facts of language structure and to capture generalizations that transformational models miss. An important attraction of Hudson's theory is that the syntax is more concrete, with no abstract underlying elements. In the appendixes, the author outlines a partial grammar for English and a small lexicon and distinguishes his theory from standard dependency theory. Hudson's provocative thesis is supported by his thorough knowledge of transformational grammar.\"" ], "authors": [ { "name": [ "W. Mann" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "P. Sutcliffe" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "K. McKeown" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "David D. McDonald" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "M. Halliday", "R. Hasan" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "R. Hudson" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null, null, null, null ], "s2_corpus_id": [ "9972666", "156521136", "62743223", "45464479", "62192469", "62158973" ], "intents": [ [], [], [], [], [], [] ], "isInfluential": [ false, false, false, false, false, false ] }
Problem: The paper addresses the development of a text generation program called Nigel within the systemic framework, aiming to control text generation to fulfill a specific purpose. Solution: The hypothesis of the paper is that by utilizing the systemic framework and semantic stratum, the Nigel program can effectively generate English text to meet specific text needs, demonstrating the potential for text generation systems.
500
0.1
null
null
null
null
null
null
null
null
f786bd12cc8b0717a1904f1c188fe5d0fa3bd00c
776531
null
Parsing as Deduction
By exploring the relationship between parsing and deduction, a new and more general view of chart parsing is obtained, which encompasses parsing for grammar formalisms based on unification, and is the basis of the Earley Deduction proof procedure for definite clauses. The efficiency of this approach for an interesting class of grammars is discussed. This work was partially supported by the Defense Advanced Research Projects Agency under Contract N00039-80-C-0575 with the Naval Electronic Systems Command. The views and conclusions contained in this article are those of the authors and should not be interpreted as representative of the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the United States Government.
{ "name": [ "Pereira, Fernando C. N. and", "Warren, David H. D." ], "affiliation": [ null, null ] }
null
null
21st Annual Meeting of the Association for Computational Linguistics
1983-06-01
21
365
null
The aim of this paper is to explore the relationship between parsing and deduction. The basic notion, which goes back to Kowalski (Kowalski, 1980) and Colmerauer (Colmerauer, 1978), has seen a very efficient, if limited, realization in the use of the logic programming language Prolog for parsing (Colmerauer, 1978; Pereira and Warren, 1980). The connection between parsing and deduction was developed further in the design of the Earley Deduction proof procedure (Warren, 1975), which will also be discussed at length here.
• A theoretically clean mechanism to connect parsing with the inference needed for semantic interpretation.
• Handling of gaps and unbounded dependencies "on the fly" without adding special mechanisms to the parser.
• A reinterpretation and generalization of chart parsing that abstracts from unessential data-structure details.
• Elucidation of parsing complexity issues for related formalisms, in particular lexical-functional grammar (LFG).
Our study of these topics is still far from complete; therefore, besides offering some initial results, we shall discuss various outstanding questions.
The connection between parsing and deduction is based on the axiomatization of context-free grammars in definite clauses, a particularly simple subset of first-order logic (Kowalski, 1980; van Emden and Kowalski, 1976). This axiomatization allows us to identify context-free parsing algorithms with proof procedures for a restricted class of definite clauses, those derived from context-free rules. This identification can then be generalized to include larger classes of definite clauses to which the same algorithms can be applied, with simple modifications. Those larger classes of definite clauses can be seen as grammar formalisms in which the atomic grammar symbols of context-free grammars have been replaced by complex symbols that are matched by unification (Robinson, 1965; Colmerauer, 1978; Pereira and Warren, 1980). The simplest of these formalisms is definite-clause grammars (DCG) (Pereira and Warren, 1980). There is a close relationship between DCGs and other grammar formalisms based on unification, such as Unification Grammar (UG) (Kay, 1979), LFG, PATR-2 (Shieber, 1983) and the more recent versions of GPSG (Gazdar and Pullum, 1982).
The parsing algorithms we are concerned with are online algorithms, in the sense that they apply the constraints specified by the augmentation of a rule as soon as the rule is applied. In contrast, an offline parsing algorithm will consist of two phases: a context-free parsing algorithm followed by application of the constraints to all the resulting analyses.
The paper is organized as follows. Section 2 gives an overview of the concepts of definite clause logic, definite clause grammars, definite clause proof procedures, and chart parsing. Section 3 discusses the connection between DCGs and LFG. Section 4 describes the Earley Deduction definite-clause proof procedure. Section 5 then brings out the connection between Earley Deduction and chart parsing, and shows the added generality brought in by the proof procedure approach. Section 6 outlines some of the problems of implementing Earley Deduction and similar parsing procedures. Finally, Section 7 discusses questions of computational complexity and decidability.
Chart parsing (Kay, 1980) and other tabular parsing algorithms (Aho and Ullman, 1972; Graham et al., 1980) are usually presented in terms of certain (abstract) data structures that keep a record of the alternatives being explored by the parser.
Looking at parsing procedures as proof procedures has the following advantages: (i) unification, gaps and unbounded dependencies are automatically handled; (ii) parsing strategies become possible that cannot be formulated in chart parsing.
The chart represents completed nonterminals (passive edges) and partially applied rules (active edges). From the standpoint of Earley Deduction, both represent derived clauses that have been proved in the course of an attempt to deduce a goal statement whose meaning is that a string belongs to the language generated by the grammar. An active edge corresponds to a nonunit clause, a passive edge to a unit clause. Nowhere in this definition is there mention of the "endpoints" of the edges. The endpoints correspond to certain literal arguments, and are of no concern to the (abstract) proof procedure. Endpoints are just a convenient way of indexing derived clauses in an implementation to reduce the number of nonproductive (nonunifying) attempts at applying the reduction rule.
We shall now give an example of the application of Earley Deduction to parsing, corresponding to the chart of Figure 1. Thus, the task of determining whether (26) is a sentence can be represented by the goal statement ans ⇐ s(0,5). If the sentence is in the language, the unit clause ans will be derived in the course of an Earley Deduction proof. Such a proof could proceed as follows: ans ⇐ s(0,5), goal statement. Note how subsumption is used to curtail the left recursion of rules (21) and (22), by stopping extraneous instantiation steps from the derived clauses (35) and (36).
As we have seen in the example of the previous section, this mechanism is a general one, capable of handling complex grammar symbols within certain constraints that will be discussed later. The Earley Deduction derivation given above corresponds directly to the chart in Figure 1.
In general, chart parsing cannot support strategies that would create active edges by reducing the symbols in the right-hand side of a rule in any arbitrary order. This is because an active edge must correspond to a contiguous sequence of analyzed symbols. Definite clause proof procedures do not have this limitation. For example, it is very simple to define a strategy, "head word parsing" (McCord, 1980), which would use the reduction rule to infer np(S0,S) ⇐ det(S0,2) & rel(3,S).
Each arc in the chart is labeled with the number of a clause in the proof. In each clause that corresponds to a chart arc, two literal arguments correspond to the two endpoints of the arc. These arguments have been underlined in the derivation. Notice how the endpoint arguments are the two string arguments in the head for unit clauses (passive edges) but, in the case of nonunit clauses (active edges), are the first string argument in the head and the first in the leftmost literal in the body.
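To make the shape of this encoding concrete, the following is a minimal sketch (not taken from the paper) of how an input sentence and the start goal can be set up as clauses with string-position arguments; the sentence, lexicon, and tuple representation are assumptions made for illustration.

```python
# Hypothetical encoding for illustration: a literal is a tuple (predicate, args...)
# and a clause is (head, [body literals]).  Sentence and lexicon are assumptions.
words = ["the", "dog", "saw", "a", "cat"]
lexicon = {"the": "det", "a": "det", "dog": "n", "cat": "n", "saw": "v"}

# W: one unit clause per word occurrence, e.g. det(0,1), n(1,2), v(2,3), ...
input_clauses = [((lexicon[w], i, i + 1), []) for i, w in enumerate(words)]

# Goal statement ans <= s(0,5): 'ans' becomes provable exactly when some s spans
# the whole input, i.e. positions 0 through len(words).
goal = (("ans",), [("s", 0, len(words))])

for clause in input_clauses + [goal]:
    print(clause)
```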
As we noted before, our view of parsing as deduction makes it possible to derive general parsing mechanisms for augmented phrase-structure grammars with gaps and unbounded dependencies. It is difficult (especially in the case of pure bottom-up parsing strategies) to augment chart parsers to handle gaps and dependencies (Thompson, 1981). However, if gaps and dependencies are specified by extra predicate arguments in the clauses that correspond to the rules, the general proof procedures will handle those phenomena without further change. This is the technique used in DCGs and is the basis of the specialized extraposition grammar formalism (Pereira, 1981).
The increased generality of our approach in the area of parsing strategy stems from the fact that chart parsing strategies correspond to specialized proof procedures for definite clauses with string arguments. In other words, the origin of these proof procedures means that string arguments are treated differently from other arguments, as they correspond to the chart nodes. [NP → Det N Rel] n(2,3). [There is an N between points 2 and 3 in the input] This example shows that the class of parsing strategies allowed in the deductive approach is broader than what is possible in the chart parsing approach. It remains to be shown which of those strategies will have practical importance as well.
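The gap-threading encoding mentioned above can be illustrated with a small, purely declarative sketch; the rule shapes, predicate names, and the gap/nogap markers below are assumptions made for illustration (in the style of DCG difference-list threading), not rules taken from the paper.

```python
# Hypothetical clauses, written as (head, [body literals]) tuples, in which each
# nonterminal carries two extra "gap" arguments threaded left to right.
gap_rules = [
    # s(S0,S,G0,G) <= np(S0,S1,G0,G1) & vp(S1,S,G1,G): the gap state is passed along.
    (("s", "S0", "S", "G0", "G"),
     [("np", "S0", "S1", "G0", "G1"), ("vp", "S1", "S", "G1", "G")]),
    # An np may be realized as a trace: it spans no input and consumes a pending gap.
    (("np", "S0", "S0", "gap", "nogap"), []),
    # A relative clause introduces the gap that the embedded sentence must consume.
    (("rel", "S0", "S"),
     [("relpron", "S0", "S1"), ("s", "S1", "S", "gap", "nogap")]),
]
for clause in gap_rules:
    print(clause)
```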
A definite clause has the form
P ⇐ Q1 & ... & Qn.
to be read as "P is true if Q1 and ... and Qn are true". If n = 0, the clause is a unit clause and is written simply as P. P and Q1, ..., Qn are literals. P is the positive literal or head of the clause; Q1, ..., Qn are the negative literals, forming the body of the clause. Literals have the form p(t1,...,tk), where p is the predicate of arity k and the ti are the arguments. The arguments are terms. A term may be: a variable (variable names start with capital letters); a constant; a compound term f(t1,...,tm), where f is a functor of arity m and the ti are terms. All the variables in a clause are implicitly universally quantified.
A set of definite clauses forms a program, and the clauses in a program are called input clauses. A program defines the relations denoted by the predicates appearing in the heads of clauses. When using a definite-clause proof procedure, such as Prolog (Roussel, 1975), a goal statement requests the proof procedure to find provable instances of P.
The variables Si are the string arguments, representing positions in the input string. For example, the context-free rule "S → NP VP" is translated into "s(S0,S2) ⇐ np(S0,S1) & vp(S1,S2)," which can be paraphrased as "there is an S from S0 to S2 in the input string if there is an NP from S0 to S1 and a VP from S1 to S2."
Given the translation of a context-free grammar G with start symbol S into a set of definite clauses G' with corresponding predicate s, to say that a string w is in the grammar's language is equivalent to saying that the start goal s(p0,p) is a consequence of G' ∪ W, where p0 and p represent the left and right endpoints of w, and W is a set of unit clauses that represents w. It is easy to generalize the above notions to define DCGs. DCG nonterminals have arguments in the same way that predicates do. A DCG nonterminal with n arguments is translated into a predicate of n+2 arguments, the last two of which are the string points, as in the translation of context-free rules into definite clauses. The context-free grammar obtained from a DCG by dropping all nonterminal arguments is the context-free skeleton of the DCG.
The fundamental inference rule for definite clauses is the following resolution rule. From the clauses
B ⇐ A1 & ... & Am.    (1)
C ⇐ D1 & ... & Di & ... & Dn.    (2)
where B and Di are unifiable with most general unifier σ, the resolvent σ(C ⇐ D1 & ... & Di-1 & A1 & ... & Am & Di+1 & ... & Dn) follows.
The proof procedure of Prolog is just a particular embedding of the resolution rule in a search procedure, in which a goal clause like (2) is successively rewritten by the resolution rule using clauses from the program (1). The Prolog proof procedure can be implemented very efficiently, but it has the same theoretical problems of the top-down backtrack parsing algorithms after which it is modeled. These problems do not preclude its use for creating uniquely efficient parsers for suitably constructed grammars (Warren and Pereira, 1983; Pereira, 1982), but the broader questions of the relation between parsing and deduction and of the derivation of online parsing algorithms for unification formalisms require that we look at a more generally applicable class of proof procedures.
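The translation just described, realized as a Prolog-style top-down backtracking proof procedure, can be sketched directly in executable form. This is an illustrative sketch only, with a grammar, lexicon, and function names chosen for the example; it also inherits the weakness noted above, since a left-recursive rule would make it loop.

```python
# Sketch: the rule "S -> NP VP" acts like the clause s(S0,S2) <= np(S0,S1) & vp(S1,S2).
# Proving symbol(i, j) top-down with backtracking mirrors what Prolog would do.
GRAMMAR = {
    "s":   [["np", "vp"]],
    "np":  [["det", "n"], ["det", "n", "rel"]],
    "vp":  [["v", "np"], ["v"]],
    "rel": [["that", "vp"]],
}
LEXICON = {"det": {"the", "a"}, "n": {"dog", "cat"}, "v": {"saw", "slept"}, "that": {"that"}}

def prove(symbol, words, i):
    """Yield every j such that `symbol` spans words[i:j], i.e. symbol(i, j) is provable."""
    if symbol in LEXICON:                    # terminal category: unit clause word(i, i+1)
        if i < len(words) and words[i] in LEXICON[symbol]:
            yield i + 1
        return
    for body in GRAMMAR.get(symbol, []):     # one definite clause per grammar rule
        yield from prove_body(body, words, i)

def prove_body(body, words, i):
    """Chain the string-position arguments through the body literals."""
    if not body:
        yield i
        return
    for j in prove(body[0], words, i):
        yield from prove_body(body[1:], words, j)

sentence = "the dog saw a cat".split()
# The start goal s(0, len(sentence)) succeeds iff the sentence is in the language.
print(any(j == len(sentence) for j in prove("s", sentence, 0)))   # True
```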
Chart parsing is a general framework for constructing parsing algorithms for context-free grammars and related formalisms. The Earley context-free parsing algorithm, although independently developed, can be seen as a particular case of chart parsing. We will give here just the basic terminology of chart parsing and of the Earley algorithm. Full accounts can be found in the articles by Kay (Kay, 1980) and Earley (Earley, 1970).
The state of a chart parser is represented by the chart, which is a directed graph. The nodes of the chart represent positions in the string being analyzed. Each edge in the chart is either active or passive. Both types of edges are labeled. A passive edge with label N links node r to node s if the string between r and s has been analyzed as a phrase of type N. Initially, the only edges are passive edges that link consecutive nodes and are labeled with the words of the input string (see Figure 1). Active edges represent partially applied grammar rules. In the simplest case, active edges are labeled by dotted rules. A dotted rule is a grammar rule with a dot inserted somewhere on its right-hand side
X → α1 ... αi-1 • αi ... αn    (4)
An edge with this label links node r to node s if the sentential form α1 ... αi-1 is an analysis of the input string between r and s. An active edge that links a node to itself is called empty and acts like a top-down prediction.
Chart-parsing procedures start with a chart containing the passive edges for the input string. New edges are added in two distinct ways. First, an active edge from r to s labeled with a dotted rule (4) combines with a passive edge from s to t with label αi to produce a new edge from r to t, which will be a passive edge with label X if αi is the last symbol in the right-hand side of the dotted rule; otherwise it will be an active edge with the dot advanced over αi. Second, the parsing strategy must place into the chart, at appropriate points, new empty active edges that will be used to combine existing passive edges. The exact method used determines whether the parsing method is seen as top-down, bottom-up, or a combination of the two.
The Earley parsing algorithm can be seen as a special case of chart parsing in which new empty active edges are introduced top-down and, for all k, the edge combinations involving only the first k nodes are done before any combinations that involve later nodes. This particular strategy allows certain simplifications to be made in the general algorithm.
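A compact executable sketch of the Earley strategy just described (predictor, scanner, and completer operating over dotted-rule items) may help make the edge operations concrete; the grammar, lexicon, and input are assumptions made for the example, and the sketch does not handle empty rules.

```python
# A compact sketch of the Earley recognizer: items are dotted rules with an origin,
# advanced by the predictor, scanner and completer.  Grammar, lexicon and input
# are assumptions made for the example; empty (epsilon) rules are not handled.
GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"], ["Det", "N", "Rel"]],
    "VP":  [["V", "NP"], ["V"]],
    "Rel": [["that", "VP"]],
}
LEXICON = {"the": "Det", "a": "Det", "dog": "N", "cat": "N", "saw": "V", "that": "that"}

def earley_recognize(words, start="S"):
    n = len(words)
    # chart[k]: items (lhs, rhs, dot, origin) = an edge from `origin` to k with
    # the first `dot` symbols of `rhs` already analyzed.
    chart = [set() for _ in range(n + 1)]
    chart[0].update((start, tuple(rhs), 0, 0) for rhs in GRAMMAR[start])
    for k in range(n + 1):
        agenda = list(chart[k])
        while agenda:
            lhs, rhs, dot, origin = agenda.pop()
            if dot < len(rhs):
                sym = rhs[dot]
                if sym in GRAMMAR:                              # predictor: empty active edges
                    for new in ((sym, tuple(r), 0, k) for r in GRAMMAR[sym]):
                        if new not in chart[k]:
                            chart[k].add(new)
                            agenda.append(new)
                elif k < n and LEXICON.get(words[k]) == sym:    # scanner: consume one word
                    chart[k + 1].add((lhs, rhs, dot + 1, origin))
            else:                                               # completer: lhs is a passive edge
                for plhs, prhs, pdot, porigin in list(chart[origin]):
                    if pdot < len(prhs) and prhs[pdot] == lhs:
                        new = (plhs, prhs, pdot + 1, porigin)
                        if new not in chart[k]:
                            chart[k].add(new)
                            agenda.append(new)
    return any(lhs == start and dot == len(rhs) and origin == 0
               for lhs, rhs, dot, origin in chart[n])

print(earley_recognize("the dog saw a cat".split()))   # True
```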
We would like to make a few informal observations at this point, to clarify the relationship between DCGs and other unification grammar formalisms, LFG in particular. A more detailed discussion would take us beyond the intended scope of this paper.
The different notational conventions of DCGs and LFG make the two formalisms less similar on the surface than they actually are from the computational point of view. The objects that appear as arguments in DCG rules are tree fragments, every node of which has a number of children predetermined by the functor that labels the node. Explicit variables mark unspecified parts of the tree. In contrast, the functional structure nodes that are implicitly mentioned in LFG equations do not have a predefined number of children, and unspecified parts are either omitted or defined implicitly through equations. The DCG rule can be read as "... is an np with structure Subj followed by a vp with structure Obj." The LFG rule can be read as "an S is an NP followed by a VP, where the value of the subj attribute of the S is the functional structure of the NP and the value of the attribute obj of the S is the functional structure of the VP."
For those familiar with the details of the mapping from functional descriptions to functional structures in LFG, DCG variables are just "placeholder" symbols (Bresnan and Kaplan, 1982). As we noted above, an apparent difference between LFG and DCGs is that LFG functional structure nodes, unlike DCG function symbols, do not have a definite number of children. Although we must leave to a separate paper the details of the application to LFG of the unification algorithms from theorem proving, we will note here that the formal properties of logical and LFG or UG unification are similar, and there are adaptations to LFG and UG of the algorithms and data structures used in the logical case.
The Earley Deduction proof procedure schema is named after Earley's context-free parsing algorithm (Earley, 1970), on which it is based. Earley Deduction provides for definite clauses the same kind of mixed top-down bottom-up mechanism that the Earley parsing algorithm provides for context-free grammars.
Earley Deduction operates on two sets of definite clauses called the program and the state. The program is just the set of input clauses and remains fixed. The state consists of a set of derived clauses, where each nonunit clause has one of its negative literals selected; the state is continually being added to. Whenever a nonunit clause is added to the state, one of its negative literals is selected. Initially the state contains just the goal statement (with one of its negative literals selected).
There are two inference rules, called instantiation and reduction, which can map the current state into a new one by adding a new derived clause. For an instantiation step, there is some clause in the current state whose selected literal unifies with the positive literal of a nonunit clause C in the program. In this case, the derived clause is σ[C], where σ is a most general unifier (Robinson, 1965) of the two literals concerned. The selected literal is said to instantiate C to σ[C].
For a reduction step, there is some clause C in the current state whose selected literal unifies with a unit clause from either the program or the current state. In this case, the derived clause is σ[C'], where σ is a most general unifier of the two literals concerned, and C' is C minus its selected literal. Thus, the derived clause is just the resolvent of C with the unit clause, and the latter is said to reduce C to σ[C'].
Before a derived clause is added to the state, a check is made to see whether the derived clause is subsumed by any clause already in the state. If the derived clause is subsumed, it is not added to the state, and that inference step is said to be blocked.
In the examples that follow, we assume that the selected literal in a derived clause is always the leftmost literal in the body. This choice is not optimal (Kowalski, 1980), but it is sufficient for our purposes. For example, given the program c(X,Z) ⇐ c(X,Y) & c(Y,Z) ... At this point, all further steps are blocked, so the computation terminates.
Earley Deduction generalizes Earley parsing in a direct and natural way. Instantiation is analogous to the "predictor" operation of Earley's algorithm, while reduction corresponds to the "scanner" and "completer" operations. The "scanner" operation amounts to reduction with an input unit clause representing a terminal symbol occurrence, while the "completer" operation amounts to reduction with a derived unit clause representing a nonterminal symbol occurrence.
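Both instantiation and reduction rely on computing a most general unifier of two literals. The following is a self-contained sketch of that operation (not the paper's code); the term representation, with tuples for compound terms and capitalized strings for variables, is an assumption carried over from the earlier sketches.

```python
# A sketch of first-order unification over the tuple term representation used in
# the earlier sketches: variables are capitalized strings, constants are other
# atoms, compound terms are tuples (functor, arg1, ..., argm).
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    """Follow variable bindings until a non-variable or an unbound variable is reached."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    t = walk(t, subst)
    if t == v:
        return True
    return isinstance(t, tuple) and any(occurs(v, arg, subst) for arg in t[1:])

def unify(t1, t2, subst=None):
    """Return a most general unifier extending `subst`, or None if the terms do not unify."""
    subst = dict(subst or {})
    pairs = [(t1, t2)]
    while pairs:
        a, b = (walk(t, subst) for t in pairs.pop())
        if a == b:
            continue
        if is_var(a):
            if occurs(a, b, subst):      # occurs check
                return None
            subst[a] = b
        elif is_var(b):
            pairs.append((b, a))
        elif isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b) and a[0] == b[0]:
            pairs.extend(zip(a[1:], b[1:]))
        else:
            return None
    return subst

# Unifying a selected literal np(S0,S1) with a derived unit clause head np(0,S):
print(unify(("np", "S0", "S1"), ("np", 0, "S")))   # e.g. {'S1': 'S', 'S0': 0}
```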
null
null
To implement Earley Deduction with an efficiency comparable, say, to Prolog presents some challenging problems. The main issues are:
• How to represent the derived clauses, especially the substitutions involved.
• How to avoid the very heavy computational cost of subsumption.
• How to recognize when derived clauses are no longer needed and space can be recovered.
(This particular strategy could be implemented in a chart parser, by changing the rules for combining edges, but the generality demonstrated here would be lost.)
There are two basic methods for representing derived clauses in resolution systems: the more direct copying method, in which substitutions are applied explicitly, and the structure-sharing method of Boyer and Moore, which avoids copying by representing derived clauses implicitly with the aid of variable binding environments. A promising strategy for Earley Deduction might be to use copying for derived unit clauses and structure sharing for other derived clauses. When copying, care should be taken not to copy variable-free subterms, but to copy just pointers to those subterms instead.
It is very costly to implement subsumption in its full generality. To keep the cost within reasonable bounds, it will be essential to index the derived clauses on at least the predicate symbols they contain, and probably also on symbols in certain key argument positions.
A simplification of full subsumption checking that would appear adequate to block most redundant steps is to keep track of selected literals that have been used exhaustively to generate instantiation steps. If another selected literal is an instance of one that has been exhaustively explored, there is no need to consider using it as a candidate for instantiation steps. Subsumption would then be applied only to derived unit clauses.
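For derived unit clauses, the simplified subsumption test just described reduces to one-way matching: a new unit clause is blocked if some stored unit clause can be instantiated to it. The sketch below is an illustrative assumption, not the paper's implementation; it continues the earlier tuple encoding and indexes stored clauses by predicate symbol, as suggested above.

```python
# Sketch: one-way matching and a unit-clause subsumption test, with derived unit
# clauses indexed by predicate symbol.  The tuple term representation
# (capitalized strings as variables) is an assumption, as before.
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def match(pattern, term, subst):
    """Bind variables of `pattern` only; return the extended substitution or None."""
    if is_var(pattern):
        if pattern in subst:
            return subst if subst[pattern] == term else None
        new = dict(subst)
        new[pattern] = term
        return new
    if isinstance(pattern, tuple) and isinstance(term, tuple) \
            and len(pattern) == len(term) and pattern[0] == term[0]:
        for p, t in zip(pattern[1:], term[1:]):
            subst = match(p, t, subst)
            if subst is None:
                return None
        return subst
    return subst if pattern == term else None

def subsumed(new_unit, state_index):
    """Block `new_unit` if a stored unit clause with the same predicate is at least as general."""
    return any(match(old, new_unit, {}) is not None
               for old in state_index.get(new_unit[0], []))

state_index = {"np": [("np", "X", "Y")]}        # np(X,Y) has already been derived
print(subsumed(("np", 0, 2), state_index))      # True: np(0,2) is an instance of np(X,Y)
print(subsumed(("vp", 2, 5), state_index))      # False: nothing stored for vp
```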
A major efficiency problem with Earley Deduction is that it is difficult to recognize situations in which derived clauses are no longer needed and space can be reclaimed. There is a marked contrast with purely top-down proof procedures, such as Prolog, to which highly effective space recovery techniques can be applied relatively easily. The Earley algorithm pursues all possible parses in parallel, indexed by string position. In principle, this permits space to be recovered, as parsing progresses, by deleting information relating to earlier string positions. It may be possible to generalize this technique to Earley Deduction by recognizing, either automatically or manually, certain special properties of the input clauses.
It is not at all obvious that grammar formalisms based on unification can be parsed within reasonable bounds of time and space. In fact, unrestricted DCGs have Turing machine power, and LFG, although decidable, seems capable of encoding exponentially hard problems. However, we need not give up our interest in the complexity analysis of unification-based parsing. Whether for interesting subclasses of grammars or specific grammars of interest, it is still important to determine how efficient parsing can be. A basic step in that direction is to estimate the cost added by unification to the operation of combining (reducing or expanding) a nonterminal in a derivation with a nonterminal in a grammar rule.
Because definite clauses are only semidecidable, general proof procedures may not terminate for some sets of definite clauses. However, the specialized proof procedures we have derived from parsing algorithms are stable: if a set of definite clauses G is the translation of a context-free grammar, the procedure will always terminate (in success or failure) when proving any start goal for G. More interesting in this context is the notion of strong stability, which depends on the following notion of offline parsability. A DCG is offline-parsable if its context-free skeleton is not infinitely ambiguous. Using different terminology, Bresnan and Kaplan (Bresnan and Kaplan, 1982) have shown that the parsing problem for LFG is decidable because LFGs are offline parsable. This result can be adapted easily to DCGs, showing that the parsing problem for offline-parsable DCGs is decidable. Strong stability can now be defined: a parsing algorithm is strongly stable if it always terminates for offline-parsable grammars. For example, a direct DCG version of the Earley parsing algorithm is stable but not strongly so.
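The offline-parsability condition can be checked on the context-free skeleton alone. Below is a sketch (my own construction, not from the paper) that drops nonterminal arguments and then tests for infinite ambiguity; it assumes the skeleton is already reduced, so that every symbol is reachable and productive, in which case infinite ambiguity is equivalent to some nonterminal deriving itself. Symbols without rules are treated as terminal (lexical) categories, and all names in the example are assumptions.

```python
def skeleton(annotated_rules):
    """Drop nonterminal arguments, keeping only predicate names: the context-free skeleton."""
    return [(head[0], [lit[0] for lit in body]) for head, body in annotated_rules]

def infinitely_ambiguous(rules, nonterminals):
    """For a reduced skeleton: true iff some nonterminal A satisfies A =>+ A."""
    # Nullable nonterminals: least fixpoint over rules whose body is entirely nullable.
    nullable, changed = set(), True
    while changed:
        changed = False
        for lhs, rhs in rules:
            if lhs not in nullable and all(s in nullable for s in rhs):
                nullable.add(lhs)
                changed = True
    # One-step edges: A -> B whenever a rule A -> alpha B beta exists with alpha, beta nullable.
    step = {a: set() for a in nonterminals}
    for lhs, rhs in rules:
        for i, s in enumerate(rhs):
            if s in nonterminals and all(x in nullable for x in rhs[:i] + rhs[i + 1:]):
                step[lhs].add(s)
    def reaches(target, current, seen):
        return any(nxt == target or (nxt not in seen and reaches(target, nxt, seen | {nxt}))
                   for nxt in step[current])
    return any(reaches(a, a, frozenset()) for a in nonterminals)

# Example: np(S0,S,Num) -> np(S0,S,Num) mod, with mod -> (empty), makes the skeleton
# infinitely ambiguous (np =>+ np), so the annotated grammar is not offline-parsable.
annotated = [
    (("s", "S0", "S"), [("np", "S0", "S1", "Num"), ("vp", "S1", "S", "Num")]),
    (("np", "S0", "S", "Num"), [("det", "S0", "S1", "Num"), ("n", "S1", "S", "Num")]),
    (("np", "S0", "S", "Num"), [("np", "S0", "S", "Num"), ("mod",)]),
    (("mod",), []),
]
print(infinitely_ambiguous(skeleton(annotated), nonterminals={"s", "np", "mod"}))   # True
```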
Finally, the combination of structure sharing and copying suggested in the last section eliminates the overhead of switching to a different derivation path in the value array method at the cost of a uniform o(log n) time to look up or create a variabl, binding in a balanced binary tree.When adding a new edge to the chart, a chart parser must verify that no edge with the same label between the same nodes is already present. In general DCG parsing (and therefore in online parsing with any unificationbased formalism}, we cannot check for the "same label" (same lemma), because lemmas in general will contain variables. \Ve must instead check for subsumption of the new lemma by some old lemma.The obvious subsumption checking mechanism has an o(n 3) worst case cost, but the improved binding representations described above, together with the other special techniques mentioned in the previous section, can be used to reduce this cost in practice.We do not yet have a full complexity comparison between online and offline parsing, but it is easy to envisage situations in which the number of edges created by an online algorithm is much smaller than that for the corresponding offline algorithm, whereas the cost of applying the unification constraints is the same for both algorithms.We have outlined an approach to the problems of parsing unification-based grammar formalisms that builds on the relationship between parsing and definite-clause deduction.Several theoretical and practical problems remain. Among these are the question of recognizing derived clauses that are no longer useful in Earley-style parsing, the design of restricted formalisms with a polynomial bound on the number of distinct derived clauses, and independent characterizations of the classes of offlineparsable grammars and languages.
Main paper: introduction: The aim of this paper is to explore the relationship between parsing and deduction. The basic notion, which goes back to Kowalski (Kowalski, 1980) and Colmerauer (Colmerauer, 1978), has seen a very efficient, if limited, realization in the use of the logic programming language Prolog for parsing (Colmerauer, 1978; Pereira and Warren, 1980). The connection between parsing and deduction was developed further in the design of the Earley Deduction proof procedure (Warren, 1975), which will also be discussed at length here.
• A theoretically clean mechanism to connect parsing with the inference needed for semantic interpretation.
• Handling of gaps and unbounded dependencies "on the fly" without adding special mechanisms to the parser.
• A reinterpretation and generalization of chart parsing that abstracts from unessential data-structure details.
• Elucidation of parsing complexity issues for related formalisms, in particular lexical-functional grammar (LFG).
Our study of these topics is still far from complete; therefore, besides offering some initial results, we shall discuss various outstanding questions.
The connection between parsing and deduction is based on the axiomatization of context-free grammars in definite clauses, a particularly simple subset of first-order logic (Kowalski, 1980; van Emden and Kowalski, 1976). This axiomatization allows us to identify context-free parsing algorithms with proof procedures for a restricted class of definite clauses, those derived from context-free rules. This identification can then be generalized to include larger classes of definite clauses to which the same algorithms can be applied, with simple modifications. Those larger classes of definite clauses can be seen as grammar formalisms in which the atomic grammar symbols of context-free grammars have been replaced by complex symbols that are matched by unification (Robinson, 1965; Colmerauer, 1978; Pereira and Warren, 1980). The simplest of these formalisms is definite-clause grammars (DCG) (Pereira and Warren, 1980). There is a close relationship between DCGs and other grammar formalisms based on unification, such as Unification Grammar (UG) (Kay, 1979), LFG, PATR-2 (Shieber, 1983) and the more recent versions of GPSG (Gazdar and Pullum, 1982).
The parsing algorithms we are concerned with are online algorithms, in the sense that they apply the constraints specified by the augmentation of a rule as soon as the rule is applied. In contrast, an offline parsing algorithm will consist of two phases: a context-free parsing algorithm followed by application of the constraints to all the resulting analyses.
The paper is organized as follows. Section 2 gives an overview of the concepts of definite clause logic, definite clause grammars, definite clause proof procedures, and chart parsing. Section 3 discusses the connection between DCGs and LFG. Section 4 describes the Earley Deduction definite-clause proof procedure. Section 5 then brings out the connection between Earley Deduction and chart parsing, and shows the added generality brought in by the proof procedure approach. Section 6 outlines some of the problems of implementing Earley Deduction and similar parsing procedures. Finally, Section 7 discusses questions of computational complexity and decidability.
definite clauses: A definite clause has the form
P ⇐ Q1 & ... & Qn.
to be read as "P is true if Q1 and ... and Qn are true". If n = 0, the clause is a unit clause and is written simply as P. P and Q1, ..., Qn are literals. P is the positive literal or head of the clause; Q1, ..., Qn are the negative literals, forming the body of the clause. Literals have the form p(t1,...,tk), where p is the predicate of arity k and the ti are the arguments. The arguments are terms. A term may be: a variable (variable names start with capital letters); a constant; a compound term f(t1,...,tm), where f is a functor of arity m and the ti are terms. All the variables in a clause are implicitly universally quantified.
A set of definite clauses forms a program, and the clauses in a program are called input clauses. A program defines the relations denoted by the predicates appearing in the heads of clauses. When using a definite-clause proof procedure, such as Prolog (Roussel, 1975), a goal statement requests the proof procedure to find provable instances of P.
The variables Si are the string arguments, representing positions in the input string. For example, the context-free rule "S → NP VP" is translated into "s(S0,S2) ⇐ np(S0,S1) & vp(S1,S2)," which can be paraphrased as "there is an S from S0 to S2 in the input string if there is an NP from S0 to S1 and a VP from S1 to S2."
Given the translation of a context-free grammar G with start symbol S into a set of definite clauses G' with corresponding predicate s, to say that a string w is in the grammar's language is equivalent to saying that the start goal s(p0,p) is a consequence of G' ∪ W, where p0 and p represent the left and right endpoints of w, and W is a set of unit clauses that represents w. It is easy to generalize the above notions to define DCGs. DCG nonterminals have arguments in the same way that predicates do. A DCG nonterminal with n arguments is translated into a predicate of n+2 arguments, the last two of which are the string points, as in the translation of context-free rules into definite clauses. The context-free grammar obtained from a DCG by dropping all nonterminal arguments is the context-free skeleton of the DCG.
The fundamental inference rule for definite clauses is the following resolution rule. From the clauses
B ⇐ A1 & ... & Am.    (1)
C ⇐ D1 & ... & Di & ... & Dn.    (2)
where B and Di are unifiable with most general unifier σ, the resolvent σ(C ⇐ D1 & ... & Di-1 & A1 & ... & Am & Di+1 & ... & Dn) follows.
The proof procedure of Prolog is just a particular embedding of the resolution rule in a search procedure, in which a goal clause like (2) is successively rewritten by the resolution rule using clauses from the program (1). The Prolog proof procedure can be implemented very efficiently, but it has the same theoretical problems of the top-down backtrack parsing algorithms after which it is modeled. These problems do not preclude its use for creating uniquely efficient parsers for suitably constructed grammars (Warren and Pereira, 1983; Pereira, 1982), but the broader questions of the relation between parsing and deduction and of the derivation of online parsing algorithms for unification formalisms require that we look at a more generally applicable class of proof procedures.
Chart parsing is a general framework for constructing parsing algorithms for context-free grammars and related formalisms. The Earley context-free parsing algorithm, although independently developed, can be seen as a particular case of chart parsing. We will give here just the basic terminology of chart parsing and of the Earley algorithm. Full accounts can be found in the articles by Kay (Kay, 1980) and Earley (Earley, 1970).
The state of a chart parser is represented by the chart, which is a directed graph. The nodes of the chart represent positions in the string being analyzed. Each edge in the chart is either active or passive. Both types of edges are labeled. A passive edge with label N links node r to node s if the string between r and s has been analyzed as a phrase of type N. Initially, the only edges are passive edges that link consecutive nodes and are labeled with the words of the input string (see Figure 1). Active edges represent partially applied grammar rules. In the simplest case, active edges are labeled by dotted rules. A dotted rule is a grammar rule with a dot inserted somewhere on its right-hand side
X → α1 ... αi-1 • αi ... αn    (4)
An edge with this label links node r to node s if the sentential form α1 ... αi-1 is an analysis of the input string between r and s. An active edge that links a node to itself is called empty and acts like a top-down prediction.
Chart-parsing procedures start with a chart containing the passive edges for the input string. New edges are added in two distinct ways. First, an active edge from r to s labeled with a dotted rule (4) combines with a passive edge from s to t with label αi to produce a new edge from r to t, which will be a passive edge with label X if αi is the last symbol in the right-hand side of the dotted rule; otherwise it will be an active edge with the dot advanced over αi. Second, the parsing strategy must place into the chart, at appropriate points, new empty active edges that will be used to combine existing passive edges. The exact method used determines whether the parsing method is seen as top-down, bottom-up, or a combination of the two.
The Earley parsing algorithm can be seen as a special case of chart parsing in which new empty active edges are introduced top-down and, for all k, the edge combinations involving only the first k nodes are done before any combinations that involve later nodes. This particular strategy allows certain simplifications to be made in the general algorithm.
dcgs and lfg: We would like to make a few informal observations at this point, to clarify the relationship between DCGs and other unification grammar formalisms, LFG in particular. A more detailed discussion would take us beyond the intended scope of this paper.
The different notational conventions of DCGs and LFG make the two formalisms less similar on the surface than they actually are from the computational point of view. The objects that appear as arguments in DCG rules are tree fragments, every node of which has a number of children predetermined by the functor that labels the node. Explicit variables mark unspecified parts of the tree. In contrast, the functional structure nodes that are implicitly mentioned in LFG equations do not have a predefined number of children, and unspecified parts are either omitted or defined implicitly through equations. The DCG rule can be read as "... is an np with structure Subj followed by a vp with structure Obj." The LFG rule can be read as "an S is an NP followed by a VP, where the value of the subj attribute of the S is the functional structure of the NP and the value of the attribute obj of the S is the functional structure of the VP."
For those familiar with the details of the mapping from functional descriptions to functional structures in LFG, DCG variables are just "placeholder" symbols (Bresnan and Kaplan, 1982). As we noted above, an apparent difference between LFG and DCGs is that LFG functional structure nodes, unlike DCG function symbols, do not have a definite number of children. Although we must leave to a separate paper the details of the application to LFG of the unification algorithms from theorem proving, we will note here that the formal properties of logical and LFG or UG unification are similar, and there are adaptations to LFG and UG of the algorithms and data structures used in the logical case.
earley deduction: The Earley Deduction proof procedure schema is named after Earley's context-free parsing algorithm (Earley, 1970), on which it is based. Earley Deduction provides for definite clauses the same kind of mixed top-down bottom-up mechanism that the Earley parsing algorithm provides for context-free grammars.
Earley Deduction operates on two sets of definite clauses called the program and the state. The program is just the set of input clauses and remains fixed. The state consists of a set of derived clauses, where each nonunit clause has one of its negative literals selected; the state is continually being added to. Whenever a nonunit clause is added to the state, one of its negative literals is selected. Initially the state contains just the goal statement (with one of its negative literals selected).
There are two inference rules, called instantiation and reduction, which can map the current state into a new one by adding a new derived clause. For an instantiation step, there is some clause in the current state whose selected literal unifies with the positive literal of a nonunit clause C in the program. In this case, the derived clause is σ[C], where σ is a most general unifier (Robinson, 1965) of the two literals concerned. The selected literal is said to instantiate C to σ[C].
For a reduction step, there is some clause C in the current state whose selected literal unifies with a unit clause from either the program or the current state. In this case, the derived clause is σ[C'], where σ is a most general unifier of the two literals concerned, and C' is C minus its selected literal. Thus, the derived clause is just the resolvent of C with the unit clause, and the latter is said to reduce C to σ[C'].
Before a derived clause is added to the state, a check is made to see whether the derived clause is subsumed by any clause already in the state. If the derived clause is subsumed, it is not added to the state, and that inference step is said to be blocked.
In the examples that follow, we assume that the selected literal in a derived clause is always the leftmost literal in the body. This choice is not optimal (Kowalski, 1980), but it is sufficient for our purposes. For example, given the program c(X,Z) ⇐ c(X,Y) & c(Y,Z) ... At this point, all further steps are blocked, so the computation terminates.
Earley Deduction generalizes Earley parsing in a direct and natural way. Instantiation is analogous to the "predictor" operation of Earley's algorithm, while reduction corresponds to the "scanner" and "completer" operations. The "scanner" operation amounts to reduction with an input unit clause representing a terminal symbol occurrence, while the "completer" operation amounts to reduction with a derived unit clause representing a nonterminal symbol occurrence.
chart parsing and earley deduction: Chart parsing (Kay, 1980) and other tabular parsing algorithms (Aho and Ullman, 1972; Graham et al., 1980) are usually presented in terms of certain (abstract) data structures that keep a record of the alternatives being explored by the parser. Looking at parsing procedures as proof procedures has the following advantages: (i) unification, gaps and unbounded dependencies are automatically handled; (ii) parsing strategies become possible that cannot be formulated in chart parsing.
The chart represents completed nonterminals (passive edges) and partially applied rules (active edges). From the standpoint of Earley Deduction, both represent derived clauses that have been proved in the course of an attempt to deduce a goal statement whose meaning is that a string belongs to the language generated by the grammar. An active edge corresponds to a nonunit clause, a passive edge to a unit clause. Nowhere in this definition is there mention of the "endpoints" of the edges. The endpoints correspond to certain literal arguments, and are of no concern to the (abstract) proof procedure. Endpoints are just a convenient way of indexing derived clauses in an implementation to reduce the number of nonproductive (nonunifying) attempts at applying the reduction rule.
We shall now give an example of the application of Earley Deduction to parsing, corresponding to the chart of Figure 1. Thus, the task of determining whether (26) is a sentence can be represented by the goal statement ans ⇐ s(0,5). If the sentence is in the language, the unit clause ans will be derived in the course of an Earley Deduction proof. Such a proof could proceed as follows: ans ⇐ s(0,5), goal statement. Note how subsumption is used to curtail the left recursion of rules (21) and (22), by stopping extraneous instantiation steps from the derived clauses (35) and (36).
As we have seen in the example of the previous section, this mechanism is a general one, capable of handling complex grammar symbols within certain constraints that will be discussed later. The Earley Deduction derivation given above corresponds directly to the chart in Figure 1.
In general, chart parsing cannot support strategies that would create active edges by reducing the symbols in the right-hand side of a rule in any arbitrary order. This is because an active edge must correspond to a contiguous sequence of analyzed symbols. Definite clause proof procedures do not have this limitation. For example, it is very simple to define a strategy, "head word parsing" (McCord, 1980), which would use the reduction rule to infer np(S0,S) ⇐ det(S0,2) & rel(3,S).
Each arc in the chart is labeled with the number of a clause in the proof. In each clause that corresponds to a chart arc, two literal arguments correspond to the two endpoints of the arc. These arguments have been underlined in the derivation. Notice how the endpoint arguments are the two string arguments in the head for unit clauses (passive edges) but, in the case of nonunit clauses (active edges), are the first string argument in the head and the first in the leftmost literal in the body.
As we noted before, our view of parsing as deduction makes it possible to derive general parsing mechanisms for augmented phrase-structure grammars with gaps and unbounded dependencies. It is difficult (especially in the case of pure bottom-up parsing strategies) to augment chart parsers to handle gaps and dependencies (Thompson, 1981). However, if gaps and dependencies are specified by extra predicate arguments in the clauses that correspond to the rules, the general proof procedures will handle those phenomena without further change. This is the technique used in DCGs and is the basis of the specialized extraposition grammar formalism (Pereira, 1981).
The increased generality of our approach in the area of parsing strategy stems from the fact that chart parsing strategies correspond to specialized proof procedures for definite clauses with string arguments. In other words, the origin of these proof procedures means that string arguments are treated differently from other arguments, as they correspond to the chart nodes. [NP → Det N Rel] n(2,3). [There is an N between points 2 and 3 in the input] This example shows that the class of parsing strategies allowed in the deductive approach is broader than what is possible in the chart parsing approach. It remains to be shown which of those strategies will have practical importance as well.
implementing earley deduction: To implement Earley Deduction with an efficiency comparable, say, to Prolog presents some challenging problems. The main issues are:
• How to represent the derived clauses, especially the substitutions involved.
• How to avoid the very heavy computational cost of subsumption.
• How to recognize when derived clauses are no longer needed and space can be recovered.
(This particular strategy could be implemented in a chart parser, by changing the rules for combining edges, but the generality demonstrated here would be lost.)
There are two basic methods for representing derived clauses in resolution systems: the more direct copying method, in which substitutions are applied explicitly, and the structure-sharing method of Boyer and Moore, which avoids copying by representing derived clauses implicitly with the aid of variable binding environments. A promising strategy for Earley Deduction might be to use copying for derived unit clauses and structure sharing for other derived clauses. When copying, care should be taken not to copy variable-free subterms, but to copy just pointers to those subterms instead.
It is very costly to implement subsumption in its full generality. To keep the cost within reasonable bounds, it will be essential to index the derived clauses on at least the predicate symbols they contain, and probably also on symbols in certain key argument positions.
A simplification of full subsumption checking that would appear adequate to block most redundant steps is to keep track of selected literals that have been used exhaustively to generate instantiation steps. If another selected literal is an instance of one that has been exhaustively explored, there is no need to consider using it as a candidate for instantiation steps. Subsumption would then be applied only to derived unit clauses.
A major efficiency problem with Earley Deduction is that it is difficult to recognize situations in which derived clauses are no longer needed and space can be reclaimed. There is a marked contrast with purely top-down proof procedures, such as Prolog, to which highly effective space recovery techniques can be applied relatively easily. The Earley algorithm pursues all possible parses in parallel, indexed by string position. In principle, this permits space to be recovered, as parsing progresses, by deleting information relating to earlier string positions. It may be possible to generalize this technique to Earley Deduction by recognizing, either automatically or manually, certain special properties of the input clauses.
decidability and computational complexity: It is not at all obvious that grammar formalisms based on unification can be parsed within reasonable bounds of time and space. In fact, unrestricted DCGs have Turing machine power, and LFG, although decidable, seems capable of encoding exponentially hard problems. However, we need not give up our interest in the complexity analysis of unification-based parsing. Whether for interesting subclasses of grammars or specific grammars of interest, it is still important to determine how efficient parsing can be. A basic step in that direction is to estimate the cost added by unification to the operation of combining (reducing or expanding) a nonterminal in a derivation with a nonterminal in a grammar rule.
Because definite clauses are only semidecidable, general proof procedures may not terminate for some sets of definite clauses. However, the specialized proof procedures we have derived from parsing algorithms are stable: if a set of definite clauses G is the translation of a context-free grammar, the procedure will always terminate (in success or failure) when proving any start goal for G. More interesting in this context is the notion of strong stability, which depends on the following notion of offline parsability. A DCG is offline-parsable if its context-free skeleton is not infinitely ambiguous. Using different terminology, Bresnan and Kaplan (Bresnan and Kaplan, 1982) have shown that the parsing problem for LFG is decidable because LFGs are offline parsable. This result can be adapted easily to DCGs, showing that the parsing problem for offline-parsable DCGs is decidable. Strong stability can now be defined: a parsing algorithm is strongly stable if it always terminates for offline-parsable grammars. For example, a direct DCG version of the Earley parsing algorithm is stable but not strongly so.
In the following complexity arguments, we restrict ourselves to offline-parsable grammars. This is a reasonable restriction for two reasons: (i) since general DCGs have Turing machine power, there is no useful notion of computational complexity for the parser on its own; (ii) there are good reasons to believe that linguistically relevant grammars must be offline-parsable (Bresnan and Kaplan, 1982).
In estimating the added complexity of doing online unification, we start from the fact that the length of any derivation of a terminal string in a finitely ambiguous context-free grammar is linearly bounded by the length of the terminal string. The proof of this fact is omitted for lack of space, but can be found elsewhere (Pereira and Warren, 1983).
General definite-clause proof procedures need to access the values of variables (bindings) in derived clauses. The structure-sharing method of representation makes the time to access a variable binding at worst linear in the length of the derivation. Furthermore, the number of variables to be looked up in a derivation step is at worst linear in the size of the derivation. Finally, the time (and space) to finish a derivation step, once all the relevant bindings are known, does not depend on the size of the derivation. Therefore, using this method for parsing offline-parsable grammars makes the time complexity of each step at worst o(n^2) in the length of the input. Some simplifications are possible that improve that time bound.
First, it is possible to use a value array representation of bindings (Boyer and Moore, 1972) while exploring any given derivation path, reducing to a constant the variable lookup time at the cost of having to save and restore o(n) variable bindings from the value array each time the parsing procedure moves to explore a different derivation path. Secondly, the unification cost can be made independent of the derivation length if we forgo the occurs check that prevents a variable from being bound to a term containing it. Finally, the combination of structure sharing and copying suggested in the last section eliminates the overhead of switching to a different derivation path in the value array method, at the cost of a uniform o(log n) time to look up or create a variable binding in a balanced binary tree.
When adding a new edge to the chart, a chart parser must verify that no edge with the same label between the same nodes is already present. In general DCG parsing (and therefore in online parsing with any unification-based formalism), we cannot check for the "same label" (same lemma), because lemmas in general will contain variables. We must instead check for subsumption of the new lemma by some old lemma. The obvious subsumption checking mechanism has an o(n^3) worst case cost, but the improved binding representations described above, together with the other special techniques mentioned in the previous section, can be used to reduce this cost in practice.
We do not yet have a full complexity comparison between online and offline parsing, but it is easy to envisage situations in which the number of edges created by an online algorithm is much smaller than that for the corresponding offline algorithm, whereas the cost of applying the unification constraints is the same for both algorithms.
conclusion: We have outlined an approach to the problems of parsing unification-based grammar formalisms that builds on the relationship between parsing and definite-clause deduction. Several theoretical and practical problems remain. Among these are the question of recognizing derived clauses that are no longer useful in Earley-style parsing, the design of restricted formalisms with a polynomial bound on the number of distinct derived clauses, and independent characterizations of the classes of offline-parsable grammars and languages.
Appendix:
null
null
null
null
{ "paperhash": [ "warren|an_efficient_easily_adaptable_system_for_interpreting_natural_language_queries", "kowalski|logic_for_problem_solving", "pereira|extraposition_grammars", "thompson|chart_parsing_and_rule_schemata_in_psg", "graham|an_improved_context-free_recognizer", "emden|the_semantics_of_predicate_logic_as_a_programming_language", "earley|an_efficient_context-free_parsing_algorithm", "allen|a_functional_grammar", "mccord|slot_grammars", "boyerroger|ttle_sharing_of_structure_in_theorem_proving_programs", "aho|the_theory_of_parsing,_translation,_and_compiling", "robinson|a_machine-oriented_logic_based_on_the_resolution_principle" ], "title": [ "An Efficient Easily Adaptable System for Interpreting Natural Language Queries", "Logic for problem solving", "Extraposition Grammars", "Chart Parsing and Rule Schemata in PSG", "An Improved Context-Free Recognizer", "The Semantics of Predicate Logic as a Programming Language", "An efficient context-free parsing algorithm", "A Functional Grammar", "Slot Grammars", "Ttle sharing of structure in theorem proving programs", "The Theory of Parsing, Translation, and Compiling", "A Machine-Oriented Logic Based on the Resolution Principle" ], "abstract": [ "This paper gives an overall account of a prototype natural language question answering system, called Chat-80. Chat-80 has been designed to be both efficient and easily adaptable to a variety of applications. The system is implemented entirely in Prolog, a programming language based on logic. With the aid of a logic-based grammar formalism called extraposition grammars, Chat-80 translates English questions into the Prolog subset of logic. The resulting logical expression is then transformed by a planning algorithm into efficient Prolog, cf. \"query optimisation\" in a relational database. Finally, the Prolog form is executed to yield the answer. On a domain of world geography, most questions within the English subset are answered in well under one second, including relatively complex queries.", "This book investigates the application of logic to problem-solving and computer programming. It assumes no previous knowledge of these fields, and may be Karl duncker in addition to make difficult fill one of productive. The unifying epistemological virtues of program variables tuples in different terminologies he wants. Functional fixedness which appropriate solutions are most common barrier. Social psychologists over a goal is represented can take. There is often largely unintuitive and, all be overcome standardized procedures like copies? Functional fixedness it can be made possible for certain fields looks. In the solution paths or pencil. After toiling over the ultimate mentions that people cling rigidly to strain on. Luckily the book for knowledge of atomic sentences or fundamental skills. Functional fixedness is a problem solving techniques such.", "Extraposition grammars are an extension of definite clause grammars, and are similarly defined in terms of logic clauses. The extended formalism makes it easy to describe left extraposition of constituents, an important feature of natural language syntax.", "In this paper I want to describe how I have used MCHART in beginning to construct a parser for gr-mm-rs expressed in PSG, and how aspects of the chart parsing approach in general and MCHART in particular have made it easy to acco~mmodate two significant aspects of PSG: rule schemata involving variables over categories; and compound category symbols (\"slash\" categories). 
To do this I will briefly introduce the basic ideas of chart parsing; describe the salient aspects of MEHART; give an overview of PSG; and finally present the interesting aspects of the parser I am building for PSG using MCHART. Limitations of space, time, and will mean that all of these sections will be brief and sketchy I hope to produce a much expanded version at a later date.", "A new algorithm for recognizing and parsing arbitrary context-free languages is presented, and several new results are given on the computational complexity of these problems. The new algorithm is of both practical and theoretical interest. It is conceptually simple and allows a variety of efficient implementations, which are worked out in detail. Two versions are given which run in faster than cubic time. Surprisingly close connections between the Cocke-Kasami-Younger and Earley algorithms are established which reveal that the two algorithms are “almost” identical.", "Sentences in first-order predicate logic can be usefully interpreted as programs. In this paper the operational and fixpoint semantics of predicate logic programs are defined, and the connections with the proof theory and model theory of logic are investigated. It is concluded that operational semantics is a part of proof theory and that fixpoint semantics is a special case of model-theoretic semantics.", "A parsing algorithm which seems to be the most efficient general context-free algorithm known is described. It is similar to both Knuth's LR(k) algorithm and the familiar top-down algorithm. It has a time bound proportional to n3 (where n is the length of the string being parsed) in general; it has an n2 bound for unambiguous grammars; and it runs in linear time on a large class of grammars, which seems to include most practical context-free programming language grammars. In an empirical comparison it appears to be superior to the top-down and bottom-up algorithms studied by Griffiths and Petrick.", "Functional Grammar describes grammar in functional terms in which a language is interpreted as a system of meanings. The language system consists of three macro-functions known as meta-functional components: the interpersonal function, the ideational function, and the textual function, all of which make a contribution to the structure of a text. The concepts discussed in Functional Grammar aims at giving contribution to the understanding of a text and evaluation of a text, which can be applied for text analysis. Using the concepts in Functional Grammar, English teachers may help the students learn how various grammatical features and grammatical systems are used in written texts so that they can read and write better.", "This paper presents an approach to natural language grammars and parsing in which slots and rules for filling them play a major role. The system described provides a natural way of handling a wide variety of grammatical phenomena, such as WH-movement, verb dependencies, and agreement.", "We describe how clauses in resolution programs can be represented and used Without applying substitutions or cons-ing lists of literals. The amount of space required by our representation of a clause is independent of the number of literals in the clause and the depth of function nesting. We introduce the concept of the value of an expression in a binding environment which we use to standardize clauses apart and share the structure of parents in representing the resolvent. We present unification and resolution algorithms for our representation. 
Some data comparing our representation to more conventional ones is given.", "From volume 1 Preface (See Front Matter for full Preface) \n \nThis book is intended for a one or two semester course in compiling theory at the senior or graduate level. It is a theoretically oriented treatment of a practical subject. Our motivation for making it so is threefold. \n \n(1) In an area as rapidly changing as Computer Science, sound pedagogy demands that courses emphasize ideas, rather than implementation details. It is our hope that the algorithms and concepts presented in this book will survive the next generation of computers and programming languages, and that at least some of them will be applicable to fields other than compiler writing. \n \n(2) Compiler writing has progressed to the point where many portions of a compiler can be isolated and subjected to design optimization. It is important that appropriate mathematical tools be available to the person attempting this optimization. \n \n(3) Some of the most useful and most efficient compiler algorithms, e.g. LR(k) parsing, require a good deal of mathematical background for full understanding. We expect, therefore, that a good theoretical background will become essential for the compiler designer. \n \nWhile we have not omitted difficult theorems that are relevant to compiling, we have tried to make the book as readable as possible. Numerous examples are given, each based on a small grammar, rather than on the large grammars encountered in practice. It is hoped that these examples are sufficient to illustrate the basic ideas, even in cases where the theoretical developments are difficult to follow in isolation. \n \nFrom volume 2 Preface (See Front Matter for full Preface) \n \nCompiler design is one of the first major areas of systems programming for which a strong theoretical foundation is becoming available. Volume I of The Theory of Parsing, Translation, and Compiling developed the relevant parts of mathematics and language theory for this foundation and developed the principal methods of fast syntactic analysis. Volume II is a continuation of Volume I, but except for Chapters 7 and 8 it is oriented towards the nonsyntactic aspects of compiler design. \n \nThe treatment of the material in Volume II is much the same as in Volume I, although proofs have become a little more sketchy. We have tried to make the discussion as readable as possible by providing numerous examples, each illustrating one or two concepts. \n \nSince the text emphasizes concepts rather than language or machine details, a programming laboratory should accompany a course based on this book, so that a student can develop some facility in applying the concepts discussed to practical problems. The programming exercises appearing at the ends of sections can be used as recommended projects in such a laboratory. Part of the laboratory course should discuss the code to be generated for such programming language constructs as recursion, parameter passing, subroutine linkages, array references, loops, and so forth.", ":tb.~tract. Theorem-proving on the computer, using procedures based on the fund~mental theorem of Herbrand concerning the first-order predicate etdeulus, is examined with ~ view towards improving the efticieney and widening the range of practical applicability of these procedures. 
A elose analysis of the process of substitution (of terms for variables), and the process of t ruth-funct ional analysis of the results of such substitutions, reveals that both processes can be combined into a single new process (called resolution), i terating which is vastty more ef[ieient than the older cyclic procedures consisting of substitution stages alternating with truth-functional analysis stages. The theory of the resolution process is presented in the form of a system of first<~rder logic with .just one inference principle (the resolution principle). The completeness of the system is proved; the simplest proof-procedure based oil the system is then the direct implementation of the proof of completeness. Howew~r, this procedure is quite inefficient, ~nd the paper concludes with a discussion of several principles (called search principles) which are applicable to the design of efficient proof-procedures employing resolution as the basle logical process." ], "authors": [ { "name": [ "D. Warren", "Fernando C Pereira" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "R. Kowalski" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Fernando C Pereira" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "H. Thompson" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "S. Graham", "M. Harrison", "W. L. Ruzzo" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "M. H. Emden", "R. Kowalski" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "J. Earley" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "H. B. Allen", "M. Bryant" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Michael C. McCord" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "S. BoyerRoger", "J. S. Moore" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "A. Aho", "J. Ullman" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "J. A. Robinson" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null, null, null, null, null, null, null, null, null, null ], "s2_corpus_id": [ "2498523", "5285557", "12928004", "15974189", "1468978", "11048276", "35664", "150098969", "6973469", "7838686", "60775129", "14389185" ], "intents": [ [], [], [], [ "background" ], [], [], [], [], [], [], [], [ "background" ] ], "isInfluential": [ false, false, false, false, false, false, false, false, false, false, false, false ] }
Problem: The paper aims to explore the relationship between parsing and deduction, specifically focusing on the connection between parsing algorithms and proof procedures for definite clauses. Solution: The hypothesis is that by connecting parsing with deduction through the Earley Deduction proof procedure, a more general view of chart parsing can be achieved, which can handle grammar formalisms based on unification efficiently and effectively. This approach is expected to provide a theoretically clean mechanism for semantic interpretation, handle gaps and unbounded dependencies seamlessly, and offer insights into parsing complexity for related formalisms like lexical-functional grammar (LFG).
500
0.73
null
null
null
null
null
null
null
null
3812d7df0a36cb2dce33c777724a90815ac8e7c8
11458028
null
An Approach to Natural Language in the {SI-N}ets Paradigm
This article deals with the interpretation of conceptual operations underlying the communicative use of natural language (NL) within the Structured Inheritance Network (SI-Nets) paradigm. The operations are reduced to functions of a formal language, thus changing the level of abstraction of the operations to be performed on SI-Nets. In this sense, operations on SI-Nets are not merely isomorphic to single epistemological objects, but can be viewed as a simulation of processes on a different level, that pertaining to the conceptual system of NL. For this purpose, we have designed a version of KL-ONE which represents
{ "name": [ "Cappelli, Amedeo and", "Moretti, Lorenzo" ], "affiliation": [ null, null ] }
null
null
First Conference of the {E}uropean Chapter of the Association for Computational Linguistics
1983-09-01
21
3
null
KL-Conc would seem to be a more natural and intuitive way of interacting with SI-Nets.

The goal of our work is to interpret conceptual operations underlying the communicative use of natural language within the Structured Inheritance Networks (SI-Nets) paradigm. In other words, this means using epistemological primitives such as Concepts, Roles and Structural Descriptions (Brachman, 1979) to represent these conceptual operations. On the one hand, the epistemological formalism, which is explicit and clear, can clarify the behaviour of the conceptual operations of NL. By the use of the SI-Nets formalism as a means of description, a new perspective can be brought out, since this formalism makes it possible to represent objects as data types structured in a complex way instead of considering them as mere atomic elements. This feature is likely to change the nature of the operations to be carried out on objects, thus leading us to deal with the complexity of many phenomena in a more adequate way. On the other hand, this can lead to an investigation of the relationships between the conceptual aspects of NL and the epistemological primitives, in order to discover how the latter are used by the previously mentioned operations. In fact, we attempt to find out whether an isomorphism exists between the objects and operations of NL and those used by epistemology.

According to Brachman (1979), five different approaches to the representational problem can be established: implementational, logical, epistemological, conceptual and linguistic. Each of them uses its own primitives, so that the five levels can be interpreted as a hierarchy where each level involves a different degree of abstraction. By virtue of this interpretation, we have tried to extend epistemology in a conceptual perspective. Our current approach considers epistemology as a starting point, thus looking at the conceptual level as one of the possible target points. This goal can be achieved by changing the level of abstraction of the operations to be performed on SI-Nets. Consequently, operations on SI-Nets could assume a different aspect, that is to say they could be viewed not as merely isomorphic to single epistemological objects but as a simulation of operations lying on a different level, for instance, that pertaining to the conceptual system of NL. This hypothesis can reduce SI-Nets to the level of an internal mechanism covering only abstract data representation, whose structure is not transparent to the user. In this case the user interacts with the internal system by means of a separate external framework.

In order to achieve this goal we have designed and implemented a language, KL-Magma, which represents our epistemological level. We are now designing and implementing an experimental language, KL-Conc, which should cover the conceptual level and which uses KL-Magma as one of its internal components. The rest of the article will be devoted to a description of these two languages, introducing considerations concerning their relevance to linguistic analysis and knowledge representation. We are confident that our approach can have interesting implications for both these fields, since KL-Conc functions can be used to describe linguistic entities in terms of conceptual operations and may be viewed as a more natural way of interacting with SI-Nets.

KL-MAGMA
KL-Magma is a version of KL-ONE implemented in MAGMA-Lisp (Asirelli et al., 1975). It is a language similar to the one described in Brachman (1979), Brachman et al.
(1978), which also takes into account the versions given in Cappelli and Moretti (1982) and Porta and Vinchesi (1982). As in our current approach, KL-Magma is mainly used as a declarative model of abstract data structures. It has no mechanism like the MSS Algorithm (Needs, 1981) or the KL-ONE Classifier (Schmolze and Lipkis, 1983), which cover procedural aspects lying within epistemology, thus reaching valuable results in discovering new types of logic by deeply exploiting SI-Nets semantics. Instead, we have tried to discover types of procedurality external to the epistemological level and pertinent to the level we intend to represent. At any rate, we intend to govern epistemological processes by the external mechanism. In other words, this means assuming, for instance, the logic of subsumption, which is peculiar to epistemology, not as an autonomous deductive mechanism but, instead, as a possible process controlled by the functions of the higher-level language.

WHAT TYPE OF CONCEPTUAL OPERATIONS?
The conceptual operations of NL we intend to interpret are, for instance, individuations of objects, evaluations of objects, evaluations of properties of objects, evaluations of configurations of objects and so on. Operations of this kind are triggered by articles, adjectives, prepositional phrases, relative clauses and so on. These operations, already intuitively described in classical Linguistics, have been given more attention by investigations based on Logic. In the logic paradigm they can be viewed as classes, sets, predicates, etc. In our opinion, the nature of these operations and, consequently, the description we intend to give of them are not completely covered by logical analysis. Interesting results have been obtained by combining traditional logical systems with extensions of lambda calculi (Webber, 1978; 1981). However, the types of complex procedurality peculiar to the operations have not yet been given a precise description; that is to say, procedurality has not been reduced to definite sets of restricted and clear procedures.

Let us now introduce an example. The Italian definite and indefinite articles (il, un) can be described as follows: a) individuation of a specific object; b) individuation of any one object; c) reference to an abstract prototype. In terms of logical description, a) and b) may correspond to the iota operator and the existential quantifier of Logic; c) is similar to the universal quantifier, even if the notion of a prototype is different, since it has an intensional nature. However, we think that the three possible descriptions of Italian articles may include types of operations not covered by the use of the above-mentioned logical operators. The article, like many other linguistic entities, integrates different kinds of operations which, at the same time, manipulate descriptions of prototypes and individuals, search into different kinds of memory, etc.

Let us introduce a new example. The adjective is one of the more complex phenomena of NL, which cannot be reduced to the notion of predicate since it triggers a set of reasoning processes, that is to say, the manipulation of parts of knowledge. 1. un bambino rosso may be interpreted as: a child has hair, hair has a color, the color can be red.
This NP cannot be literally translated into English without adding more information; the appropriate translation is: a red-haired child. In terms of SI-Nets this process can be represented as shown in Figure 1, assuming that every lexical item of the NP has its own intensional representation. However, the adjective does not specify all the steps of the reasoning process that it triggers, but only indicates, together with the name, the two extreme points of the chain, leaving the intermediate ones undefined. The entire process, using generic knowledge as the reference point, is shown in Figure 2. It would be oversimplifying, as stated above, to use the notion of predicate to interpret this complex process, as well as the other possible interpretation of the adjective: the one corresponding to the notion of "type of", as in the NP "a red color" (see Figure 3).

This type of phenomenon can be investigated by deeply exploiting the structure and the semantics of SI-Nets. The structure of a role can be used as a configuration of objects which are likely to be manipulated by complex processes not yet deeply investigated from any viewpoint other than the epistemological one. Once considered as a complex link, as it actually is, a role may be the locus where different processes can be triggered. It may be used simply to satisfy a structure of another role lying higher within the network, or to trigger the complex processes we were talking about. The two behaviours mentioned exhibit different levels of abstraction; in the former case this means performing epistemological operations, while in the latter we simulate processes of a conceptual system used by NL.

The question now arises whether it is possible to reduce these types of operations to a set of functions of a formal language, each of which covers a well-defined process corresponding to a well-defined set of operations on SI-Nets, that is, to a set of KL-Magma functions. The choice of a new language has many motivations: a) from the conceptual viewpoint, this means reducing operations to functions that are well defined from a semantic viewpoint, which lends clearness to the process to be represented; b) from the epistemological viewpoint, it is reasonable to think that a language such as KL-Magma may be extended by another language, thus achieving a higher degree of abstraction; c) a language is a uniform mechanism for the integration of interpreters of several symbolic processes. This integration is likely to bring out more clearly the relevant phenomena of the process represented.

On the basis of the linguistic assumptions previously outlined and using KL-Magma as a language which handles SI-Nets, we are now designing and implementing an experimental language, KL-Conc, whose functions try to simulate the conceptual operations previously described. Before describing KL-Conc functions in detail, it is worthwhile discussing its internal organization. In the framework of KL-ONE, a relevant distinction has been drawn between the Terminological Box (T-Box) and the Assertional Box (A-Box) (Brachman, 1981).
The T-Box maintains the detailed description of the objects, while the A-Box contains the set of assertions about the objects. The former corresponds to the ability of describing by the use of NPs, and the latter to that of constructing complex sentences. A discussion has arisen whether it is possible to handle the two boxes, which correspond to two different areas of memory, using the same language. In KL-ONE, new functions have been added in order to give it an assertional power (nexus, context) (Woods, 1979). A recent extension of KL-ONE (Brachman et al., 1983) has adopted the solution of creating two distinct languages: one for the T-Box and the other for the A-Box. The former is a sort of KL-ONE viewed in a functional way, while the latter is a language based on First Order Predicate Logic. KL-Magma is used only to handle the T-Box and it has no assertional power. Instead, with KL-Conc we are trying to design a language which covers both terminological and assertional aspects, even if it is more biased towards assertionality. It is our intention to handle the T-Box mainly in an assertional way.

In order to achieve this goal we have introduced the distinction between Long Term Memory (LTM) and Working Memory (WM), which in part covers the traditional one between T-Box and A-Box. The LTM is organized in KL-Magma data structures; it contains descriptional knowledge about generic and individual objects. The WM contains the history of the objects organized in a structured way. This is the central component of our current hypothesis. The WM contains the traces of contextual relationships between objects, as well as operations triggered on and by objects; it can also contain other symbolic systems. The task of the WM is mainly to hold hypotheses to be mapped onto the LTM, which requires the cooperation of several interpreters.

The introduction of a larger number of memory spaces increases the power of the language. For instance, a structured WM is likely to increase the number of symbolic systems interacting with one another. This makes it possible to insert into the language functions based on different processes. Taking, for instance, the history of the objects as a reference point, the objects themselves can be accessed according to the order in which they appear in the time flow. The function <LAST arbitrary_name> returns the last object, created or manipulated, belonging to the class named by arbitrary_name. In other words, this allows the user to refer to objects using anaphorical references, that is to say, using a symbolic system which is organized and represented in a different way from epistemology. With the WM we are trying to create the basic mechanism to handle these types of processes.

KL-Conc: External Organization
KL-Conc functions handle real-world objects, so the user only needs to know a set of functions to be applied to objects. In this way, the structure of the SI-Net which internally organizes the data is hidden; the only data which are transparent are the objects, which may be individual or generic, together with syntactic rules for combining functions. These last are very flexible. Objects can be accessed using arbitrary names or by means of syntactic combinations which conceptually correspond to complex tests on the nature of objects, the configuration of objects, etc. Objects can also be accessed according to the order in which they appear in the time flow. The user can use the same name both for generic and individual objects.
This is made possible by means of an internal generator of names which, starting from the name of a generic object, provides any individual of that class with a different name. This feature covers the part of the naming system of NL which uses the same name for individuals and prototypes. It does not cover the use of proper names, which has been taken in JARGON (Woods, 1979) as the only means for naming individuals, thus oversimplifying the real system used by NL (Mark, 1981). Objects can be accessed without the use of names, by means of functions or combinations of functions, in order to perform complex tests on the nature of objects. This means referring to objects by testing properties or configurations.

(Create_Concept X1 individual)
((Not (Generic_Concept_P X)) (Create_Concept X generic))
(Establish_as_Individuator X1 X)

This is one of the most "declarative" functions, since it creates a new individual concept without searching in the LTM. In other words, the user must be conscious that the new object is added to the LTM and that it is different from all the other objects. A more psychologically oriented behaviour would require testing in advance the nature of the new object in order to decide whether the object is similar to, or coincides with, an individual object already inserted into the LTM. The same problem has been overcome in KRYPTON by means of the TELL/ASK switch (Brachman et al., 1983).

<JUSTONE arbitrary_name> verifies whether there exists a unique individual either named by arbitrary_name or defined by tests or combinations of tests according to KL-Conc syntax. In other words, this means verifying whether the object is unique as to its name, or as to one of its properties, etc. The KL-Conc expressions for the two meanings are, respectively: (JUSTONE table) and (JUSTONE (TEST_PROPERTY table red)). This function has a complex behaviour since, intuitively, it must verify the uniqueness of an object and must return: i) the individual, if unique; ii) the list of individuals, if more than one satisfies the conditions given by the assertions; iii) NIL, if no individual exists satisfying the conditions (Carnap, 1947). The three answers have different meanings, since they imply different operations to be triggered on the memory spaces or, at any rate, they have different effects on the behaviour of functions in which JUSTONE can be nested.

The function <TEST_CONFIGURATION_OF_PROPERTIES arbitrary_name1 arbitrary_name2> verifies whether arbitrary_name2 exists in the horizontal chain of roles starting from arbitrary_name1 (see Figure 4).

With the function <ADD_PROPERTY arbitrary_name1 arbitrary_name2> we intend to add roles to concepts, so that the user need not have any specific knowledge about the distinction between generic and instance roles or, seen from a different viewpoint, between properties of prototypes and properties of individuals. Taking NL as the reference point, we think that the above-mentioned distinction is peculiar only to certain linguistic elements; in the case of operations on properties, no distinction is made; it is the conceptual operations governing the operations on properties that control the correct application of the adding or testing of properties. Consequently, the function ADD_PROPERTY must be designed in order to make it possible to trigger the correct procedures depending on the type of object to which it is applied.
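As a rough illustration of this type-dependent dispatch, the sketch below (hypothetical Python; the class layout and every identifier are invented for the example and do not come from KL-Conc or KL-Magma) applies one procedure when the target is an individual concept and another when it is a generic concept:

```python
# Illustrative only: a toy dispatch in the spirit of ADD_PROPERTY.
# All names here are hypothetical, not taken from the system described.

class Concept:
    def __init__(self, name, generic, parent=None):
        self.name = name
        self.generic = generic        # True for generic concepts (prototypes)
        self.parent = parent          # generic ancestor, for individuals
        self.roles = {}               # role name -> value (None for generic roles)

def add_property(concept, role_name, value=None):
    """Add a role, choosing the procedure from the type of the concept."""
    if concept.generic:
        # Generic concept: add a generic role, with no value restriction filled in.
        concept.roles[role_name] = None
    else:
        # Individual concept: add an instance role satisfying the generic one,
        # creating the generic role on the ancestor if it does not exist yet.
        ancestor = concept.parent
        if ancestor is not None and role_name not in ancestor.roles:
            ancestor.roles[role_name] = None
        concept.roles[role_name] = value

table = Concept("table", generic=True)
table_1 = Concept("table#1", generic=False, parent=table)
add_property(table_1, "color", "red")   # instance role on table#1;
                                        # generic role "color" added to table
print(table.roles)    # {'color': None}
print(table_1.roles)  # {'color': 'red'}
```

In this sketch the dispatch is hard-coded in the function; the paper instead proposes driving it from a metarepresentation of KL-Magma, as described next.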
For this purpose, we intend to use a metarepresentation of KL-Magma (Cappelli et al., 1983) which, on detecting the type of object, automatically applies the appropriate procedures. This implies a system which creates or tests knowledge structures by interpreting its own syntax. Let us now briefly describe two possible behaviours of this function.

When applied to individual concepts, it creates a new instance role, establishing it as a satisfier of a higher generic role of the generic concept ancestor of the individual concept. If a possible generic role does not exist, it is created without inserting any V/R in the generic role, since it could be a more general concept than the generic concept ancestor of the value of the newly created instance role. The structures created by this function are shown in Figure 5 by dotted lines.

When applied to generic concepts, the function adds a new generic role, trying to link it with a higher generic role. If no generic role is found, a higher generic role is created without providing it with any information other than what can be inferred from the structure of the newly created subrole.

The functions described in this article represent only a subset of the operations which can be embodied in the language. In this sense, the number of KL-Conc functions is likely to be increased in order to cover new processes. So far, we have designed the functions for those operations which exhibit the same behaviour whatever domain they are applied to, since they represent the "deep" behaviour of syntactic elements. It is to be emphasized that we have tried to reduce to the form of functions of a language all the operations of NL which are domain-independent and which represent aspects of the abstract syntactic ability of structuring knowledge facts (Cappelli et al., 1983; Cappelli and Moretti, 1983). Using KL-Conc it is possible to investigate how linguistic elements can be described in terms of conceptual operations. This is a further step towards the linguistic level. On reaching this level, the task will be to discover how the conceptual operations are embodied in linguistic forms. The previously mentioned Italian articles may be described as follows:

Some conclusions may now be drawn, both from a linguistic and a knowledge representation viewpoint. From a linguistic viewpoint some relevant facts must be pointed out. The level of integration reached by the construction of a uniform language can bring out more clearly the nature of many phenomena of NL, since it is possible to put together many processes which cooperatively contribute to the realization of a single phenomenon. This means looking at the complexity of NL with the aid of a powerful symbolic instrument, capable of handling several aspects of that complexity contemporaneously, thus reaching a higher degree of adequacy. In designing KL-Conc, we aim to create a framework which can extend the possibility of investigating and representing these phenomena. From a knowledge representation viewpoint, KL-Conc would seem to be a means for interacting with SI-Nets in an intuitive way. The user is not required to have a specific knowledge of the SI-Nets formalism; he only needs to know a set of functions to be applied to objects. In this sense KL-Conc assumes a more natural aspect, thus overcoming the constraint of a structure-oriented language such as KL-Magma.
This feature has been obtained by handling SI-Nets in a more compact way. KL-Conc provides the user with a set of functions which are not isomorphic to single epistemological objects but which handle pieces of network starting from discontinuous information. This weakness, peculiar to NL, is made possible in KL-Conc by assuming the epistemological level as a reference schema, instead of a reductionist formalism. This means introducing mechanisms for relaxing the rules of KL-Magma. In this way KL-Conc can be seen as a "constructive" system (in the sense of Korner, 1970) which manipulates its "factual" system (KL-Magma) in an intuitionistic way. Finally, KL-Conc suggests a different way of exploiting spreading activation mechanisms (Quillian, 1968) using several symbolic systems
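To make the intended style of interaction more concrete, the following toy sketch (hypothetical Python over a tiny in-memory store; the Memory class and every other name are invented and are not part of KL-Conc) imitates the behaviour described for the internal name generator, JUSTONE and LAST: the user addresses objects through functions and through the history of use, never through the network structure itself.

```python
# Illustrative sketch only: a miniature KL-Conc-style query layer.
# The data representation and all identifiers are invented for this example.

class Memory:
    def __init__(self):
        self.individuals = []     # long-term memory: (name, class_name, properties)
        self.history = []         # working memory: order of creation/manipulation

    def create(self, class_name, **properties):
        """Name generator: derive an individual name from the generic name."""
        count = sum(1 for _, c, _ in self.individuals if c == class_name)
        entry = (f"{class_name}#{count + 1}", class_name, properties)
        self.individuals.append(entry)
        self.history.append(entry)
        return entry[0]

    def just_one(self, class_name, **tests):
        """Return the individual if unique, the list if several, None (NIL) if none."""
        hits = [n for n, c, props in self.individuals
                if c == class_name and all(props.get(k) == v for k, v in tests.items())]
        if not hits:
            return None
        return hits[0] if len(hits) == 1 else hits

    def last(self, class_name):
        """Anaphoric access: the most recently used individual of the class."""
        for name, c, _ in reversed(self.history):
            if c == class_name:
                return name
        return None

m = Memory()
m.create("table", color="red")
m.create("table", color="blue")
print(m.just_one("table"))                 # ['table#1', 'table#2']  (not unique)
print(m.just_one("table", color="red"))    # 'table#1'
print(m.last("table"))                     # 'table#2'
```

The three possible results of just_one mirror the three answers listed for JUSTONE above: a unique individual, a list of individuals, or NIL.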
null
null
null
null
Main paper: Appendix:
null
null
null
null
{ "paperhash": [ "brachman|krypton:_a_functional_approach_to_knowledge_representation", "cappelli|kl-conc:_a_language_for_interacting_with_si-nets", "schmolze|classification_in_the_kl-one_knowledge_representation_system", "schmolze|proceedings_of_the_1981_kl-one_workshop,", "woods|research_in_natural_language_understanding", "brachman|klone_reference_manual" ], "title": [ "Krypton: A Functional Approach to Knowledge Representation", "KL-Conc: A Language for Interacting With SI-Nets", "Classification in the KL-ONE Knowledge Representation System", "Proceedings of the 1981 KL-ONE Workshop,", "Research in Natural Language Understanding", "KLONE Reference Manual" ], "abstract": [ "A great deal of effort has focused on developing frame-based languages for knowledge representation. While the basic ideas of frame systems are straightforward, complications arise in their design and use. The authors have developed a design strategy for avoiding these types of problems and have implemented a representation system based on it. The system, called Krypton, clearly distinguishes between definitional and factual information. In particular, Krypton has two representation languages, one for forming descriptive terms and one for making statements about the world using these terms. Further, Krypton provides a functional view of a knowledge base, characterized in terms of what it can be asked or told, rather than in terms of the particular structures it uses to represent knowledge. 11 references.", "This paper introduces KL-Conc language, a Knowledge Representation Language based on KL-Magma, which is a version of KL-ONE. The aim of KL-Conc is to simulate conceptual operations underlying natural language. Relationships and differences between KL-Conc and KL-ONE are also discussed.", "KL-ONE lets one define and use a class of descriptive terms called Concepts, where each Concept denotes a set of objects A subsumption relation between Concepts is defined which is related to set inclusion by way of a semantics for Concepts. This subsumption relation defines a partial order on Concepts, and KL-ONE organizes all Concepts into a taxonomy that reflects this partial order. Classification is a process that takes a new Concept and determines other Concepts that either subsume it or that it subsumes, thereby determining the location for the new Concept within a given taxonomy. We discuss these issues and demonstrate some uses of the classification algorithm.", "Abstract : The Second KL-ONE Workshop gathered researchers from twenty-one universities and research institutions for a series of discussions and presentations about the KL-ONE knowledge representation language. These proceedings summarize the discussions and presentations, provide position papers from the participants, list the agendas of the Workshop along with the names and addresses of the participants, and include a description of the KL-ONE language plus an index of some KL-ONE technical terms. (Author)", "Abstract : The goals of the project are to develop techniques required for fluent and effective communication between a decision maker and an intelligent computerized display system in the context of complex decision tasks such as military command and control. This problem is approached as a natural language understanding problem, since most of the techniques required would still be necessary for an artificial language designed specifically for the task. 
Characteristics that are considered important for such communication are the ability for the user to omit details that can be inferred by the system and to express requests in a form that 'comes naturally' without extensive forethought or problem solving. These characteristics lead to the necessity for a language structure that mirrors the user's conceptual model of the task and the equivalents of anaphoric reference, ellipsis, and context-dependent interpretation of requests. these in turn lead to requirements for handling large data bases of general world knowledge to support the necessary inferences. The project is seeking to develop techniques for representing and using real world knowledge in this context, and for combining it efficiently with syntactic and semantic knowledge. This report discusses aspects of research to date and a general approach to definite anaphoric reference and near-deterministic parsing strategies.", "Abstract : KLONE is being developed to be an epistemologically-explicit language for representing conceptual knowledge and structured inheritance; this manual provides user documentation for the current state of the INTERLISP implementation. Documented are: types of KLONE entities and relationships; procedural and data attachment; conceptual meta-description of KLONE entities; implementation naming conventions; and all user-accessible KLONE primitives." ], "authors": [ { "name": [ "R. Brachman", "R. Fikes", "H. Levesque" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "A. Cappelli", "L. Moretti", "Carlo Vinchesi" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "James G. Schmolze", "Thomas A. Lipkis" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "James G. Schmolze", "R. Brachman" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "W. Woods", "R. Brachman" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "R. Brachman", "E. Ciccarelli", "Norton Greenfeld", "Martin D. Yonke" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null, null, null, null ], "s2_corpus_id": [ "5995339", "17990569", "6876366", "60933710", "61138592", "59994273" ], "intents": [ [], [ "methodology" ], [], [], [], [] ], "isInfluential": [ false, true, false, false, false, false ] }
null
497
0.006036
null
null
null
null
null
null
null
null
59d31dd410550eb1f7c3d58767fc6da3eb116262
32583478
null
Iterative Operations
We present in this article, as part of an aspectual operation system, a generation system of iterative expressions using a set of operators called iterative operators. In order to execute the iterative operations efficiently, we have first classified propositions denoting a single occurrence of a single event into three groups.
{ "name": [ "Yamada, Sae" ], "affiliation": [ null ] }
null
null
First Conference of the {E}uropean Chapter of the Association for Computational Linguistics
1983-09-01
11
0
null
The iterative aspect is one of the sentential aspects and denotes plural occurrence of an event or an action. The iterative aspect therefore concerns the property of countability. The iterative operations give the iterative aspect to a proposition and are concerned with the plurality of occurrences of the event. As we distinguish count nouns (count terms) from non-count nouns (mass terms), we distinguish countable events from non-countable events, or more precisely, events of which the number of occurrences is countable and those of which the number of occurrences is non-countable. As a count noun has a clear boundary, a countable event also has to have a clear boundary. Countable events are, for instance: he opens a window; he reads a book; he kicks a ball, etc. Non-countable events are, for instance: he swims; he sleeps deeply; he runs fast, etc. Only a countable event can be repeated: he opens three windows; he kicked the ball twice, etc. A non-countable event cannot be repeated: *he sleeps twice. The distinction between the two kinds of events (and the two kinds of propositions), also called the telic-atelic, cyclic-non-cyclic or bounded-non-bounded distinction (these terms are used by Garey, Bull and Allen respectively), is therefore necessary for the execution of the iterative operations. It may be useful to give here some remarks on the terminology. Terms such as 'iterative', 'repetitive', 'frequentative' and 'multiplicative' are very often used as synonyms. However, there are some works which distinguish them from one another. The term repetitive is sometimes used to indicate only one repetition and the term iterative to indicate more than two repetitions. And sometimes the term iterative is used for one repetition and the term frequentative for several repetitions. We use both the terms 'iterative' and 'repetitive' (hence 'iteration' and 'repetition') as synonyms. In this article 'repetition' means, in most cases, two or more occurrences of the same event. But in order to prevent a misunderstanding, we rather use the term 'iteration'. A 'proposition' denotes an event and is a neutral expression in the sense that the tense, aspect and mode operators operate on it.
Two kinds of iterations are distinguished: regular and irregular iterations, i.e. the iterations which correspond to cardinal count adverbials and the iterations which correspond to frequency adverbials. A regular iteration is defined either by a regular frequency of occurrence of the event (called 'fixed frequency' by Stump), or by a constant length of the intervals between occurrences: The busses started at five-minute intervals. The extreme case of the regular iteration is called 'habitude': (2) En été, elle se levait à quatre heures. A regular frequency or a constant interval is indicated by the operator F. An irregular iteration is indicated either with a number of occurrences of an event or with irregular lengths of the intervals between occurrences: (3) Linda called you several times last night. Nous avons entendu le même bruit par intervalles. (Interval) Both the numerical indications and the indications of irregular intervals are given with the operator N. Considering the structure of a repeated event, we can distinguish several forms of repetitions, according to which constituent is affected. If we say "She changes her dress several times a day", it is the object which is affected by the repetition. Using grammatical category names we can indicate the repeated constituent. At the present stage we have no mechanism detailed enough to differentiate the repeated constituent, nor do we consider the differentiation necessary. We treat all these repetitions as having the type (Subj Pred)* (in a more general form, φ*), and we find no inconvenience in doing so. An event consists of several phases: the beginning, the middle, the end and eventually the result and the imminent phase, i.e. the phase directly preceding the beginning point. As far as repetition is concerned, only a phase including a culmination point is capable of repetition, because the repetition presupposes that the event has a (real or hypothetical) boundary. Like the distinction of the repeated constituent, the distinction of the repeated phase is not especially significant in the iterative operations. Besides, if necessary, we can treat each phase as an independent event: the beginning part φ' of the event φ can be considered as an event. Thus, for the time being, the distinction of phases is also neglected in the iterative operations. Homogeneous iteration and heterogeneous iteration: a homogeneous iteration is an ordinary iteration of the type (φ)*, and a heterogeneous iteration is what is called by Imbs 'la répétition d'alternance'. It is not the iteration of a simple event but the iteration of two or more mutually related events. It has the form (φ' + φ'' ...)*: (7) J'allume et j'éteins une fois par minute. The most frequent case is the combination of two events, but the combination of three events is still possible: (8) Depuis une heure il va à la fenêtre toutes les trois minutes, s'arrête un moment et revient encore. The combination of more than three events is not natural.
We present in this article, as part of an aspectual operation system, a generation system of iterative expressions using a set of operators called iterative operators. In order to execute the iterative operations efficiently, we have first classified propositions denoting a single occurrence of a single event into three groups. The definition of a single event is given recursively. The classification has been carried out especially in consideration of the durative / non-durative character of the denoted events and also in consideration of the existence / non-existence of a culmination point (or a boundary) in the events. The operations concerned with iteration have either the effect of giving a boundary to an event (in the case of a non-bounded event) or of extending an event through repetitions. The operators concerned are: N, F .. direct iterative operators; B, G .. boundary giving and prolonging operators; I .. extending operator. There are direct and indirect operations: the direct ones change a non-repetitious proposition into a repetitious one directly, whereas the indirect ones change it indirectly. The indirect iteration is indicated with ~. The scope of each operator is not uniquely definable, though the mutual relation of the operators can be given more or less explicitly. The system of the iterative operations, which makes up a part of the aspectual operation system, is based on the assumption that the general mechanism of repetition is language independent and can be reduced to a small number of operations, though language expressions of repetition differ from language to language. It must be noticed that even in one language there are usually several means to express repetitious events. We know that "il lui cognait la tête contre le mur" and "il lui a cogné deux ou trois fois la tête contre le mur", the examples given by W. Pollak, express the same event. We also have linguistic means for iterative expressions on all linguistic levels: morphological, syntactical, semantic, pragmatic, etc. As the general form of repetition we use Φ = (φi)*, in which Φ is the whole event, φi a single occurrence of a single event and * an iteration indicator. For example: Φ .. (a series of) explosions took place; φ3 .. a single explosion took place; * .. an indefinite number of times. φi actually denotes a proposition describing a single event Si. The * sign will later be replaced by a single or complex operator or operators, which operate(s) on φi. We hope also to be able to give various expressions to the same event and for that purpose we are planning to have a set of interpretation rules. The language mainly concerned is Japanese, but in this article examples are given in French, in English or in German.
In the present article we are exclusively concerned with aspect operators; tense operators are not treated, though past-tense sentences are used as examples. We will be content just to say that tense operators come after aspect operators in the operation order. A sentential aspect is the synthesis of the aspectual meanings of all constituents of the sentence. For the efficient execution of the iterative operations, as well as of all aspectual operations, we have to classify beforehand the propositions φi denoting events Si. For this classification we take account of the durative/non-durative and bounded/non-bounded characters of events. This classification is also necessary for other aspectual operations. In order to show the validity of the classification, we give an example of another aspectual operation: the inchoative operation. Inch is a boundary-giving operator and gives the initial border to any proposition, but the meaning of Inch(φi) differs according to φi. With φ1, which does not imply any boundary, Inch functions to give the initial boundary. With φ2, which implies an end point, Inch fixes the initial boundary. ex. φ2 .. Bob builds a sandcastle; Inch(φ2) .. Bob began to build a sandcastle. The length of the event is the time stretch at the end of which Bob is supposed to complete the sandcastle. With φ3 the condition is quite different: φ3, a momentaneous proposition, implies no length (or no meaningful length), and the beginning point and the end point overlap each other. Inch(φ3) automatically gives the iteration of the event, and the initial boundary becomes the initial boundary of the prolonged event. ex. φ3 .. he knocks (one time) on the door; Inch(φ3) .. He began knocking (repeatedly) on the door. The function of Inch is the same for all three examples, but the meaning of the beginning differs from one to another. The third case (that of φ3) is an example of the fact that a non-repetitious operator can produce certain repetitions. This is the repetitious effect of a non-repetitious operator, to which we will return later. An iterative operation is noted Rj(φi), where Rj is either a single operator or several operators. As was already said, a necessary condition of the iteration is that the event in question has a clear boundary. Thus the operators concerned with the iterative operations have either the effect of giving a certain boundary (in the case of a non-bounded event), Bφi, or the effect of repetition. The operators indicated with capital letters are not individual operators, but group names; an individual operator has for instance a form like N2 or F1/w(eek). I is not a proper repetitious operator. However, if the operator I operates on φ2 or on φx, a bounded proposition, it turns the proposition into that of a repeated event. In this case the iterative operation is effectuated indirectly; we call this iteration 'implicative iteration'. ex. φ2 .. John walks to the door; I .. for hours; Iφ2 .. John walked to the door for hours. In order to differentiate this Iφ2 from Iφ1, we use the symbol ~ for an implicative iteration: I(~φ2) (exactly, ~φ is ~φ1 or ~φ2). The symbol ~ appears not only with the operator I, but also with N and F. Term φ3 = Term(~φ3): It stopped beating. As for the strings Nφ1 and Fφ1, they do not satisfy the basic condition of the iteration, i.e. φ1 has no boundary. With some special interpretation rules, however, we can interpret them as Nφ2 and Fφ2 respectively. ex. Fφ1: ?He walks three times a week. → He walks from the house to the station three times every week (Fφ2).
The above operators N, F, I can be applied successively one after the other, but not every combination nor every application order is acceptable. F·I, I·F, F·N and N·I are acceptable, but N·F is not natural. I·N gives in a certain operational order the same effect as the single operator F, but in other orders other effects. Using complex operators, we get the outputs I(Fφ2), I(Fφ3), F(Nφ2), F(Nφ3), N(Iφ1), F(Iφ1), I(Nφ2), I(Nφ3). Combinations of more than two operators are also possible. Adding B, the boundary-giving operators, and G, the prolonging operators, to the above operators, we can further extend the iterative operations. B is by itself no repetitious operator; its proper function is to give a boundary to a non-bounded proposition. One of the B-operators is Inch: Inch φ1 .. he begins to write. Once an event gains a boundary, it can be repeated. (15) N(Bφ1): He began to write three times. Another application order of N and B gives another kind of output (16). In some cases the operation of B brings about repetitions, as we have seen with the operator Inch; this happens in the combination of B and φ3. A repeated event (which in fact has a durative character like φ1) can again be given a boundary, and this renewed bounded event can again be repeated. This makes a multiple iteration. The iteration can be explicit or implicative. The following examples given by Freed also have a multiple iterative structure, 'a series of series' according to her terminology. (The direction of an arrow in the figure indicates the written order of two operators in a form; the order of application in the operation is therefore the inverse.) It is often proposed to distinguish an event from its background (or its occasion). The background is a time stretch in which the event takes place. From a purely theoretical viewpoint, the idea of the double structure event-background is very helpful for the analysis of ambiguous structures. ex. La toupie a tourné trois fois. (This example is borrowed from Rohrer.) In this expression, 'trois fois' can be either the number of occurrences of the event (i.e. the number of spins of the top) or the number of occasions on which the top spun. With the iterative operators the difference can be given clearly: Nφ3 and N(~φ3). In the former case the top spun three times on one occasion, and in the latter case the top spun several times on three occasions. The operators N, F, I are related to both the event and the background. Graphically the difference can be indicated as in figure 2. Operationally, if we differentiated the background from the event at the level of the iterative operations, the rules would have to be too complicated. For the time being the operators N, F, I are used regardless of whether they operate on the event or on the occasion. As for the negative cases of iterative operations, there are several possibilities. Either a negated iterative proposition remains iterative or it becomes a non-iterative proposition. In other words, the negation affects the whole proposition in the case of total negation, and affects just the number of repetitions or the frequency in the case of partial negation. In the former case the scope of the negation is larger than that of the iteration, and in the latter case the scope of the negation is smaller than that of the iteration. (23) Nφ3: Il est venu deux fois. ¬(Nφ3), or rather ¬φ3: Il n'est jamais venu. (Total negation) (¬N)φ3: Il n'est pas venu deux fois.
(En effet, il n'est venu qu'une fois.) (Partial negation) N(¬φ3): Il n'est pas venu deux fois. Déjà deux fois il n'est pas venu. Fφ3: Il sortait trois fois par semaine. ¬(Fφ3), or rather ¬φ3: Il n'est jamais sorti. (Total negation) (¬F)φ3: Il ne sortait pas trois fois par semaine: en effet il ne sortait que deux fois par semaine. (Partial negation) F(¬φ3): Trois jours par semaine, il ne sortait pas. The result depends on the stage of the operations at which the negation is applied. Several kinds of interpretation rules are envisaged. The interpretation rules of the first category are those which give adequate interpretations to Nφ1, Fφ1, etc., in consideration of the context on the pragmatic level. Nφ1 usually receives an interpretation as Nφ2, and Fφ1 as Fφ2. For example, "I walked three times this week" can be interpreted as: "I walked three times from the house to the station this week." The second kind of interpretation rules are concordance rules, which connect diverse expressions with one and the same event. Expressions that differ in appearance or in the means of expression are interconnected by these rules. Eventually, the distinction of the background from the event can also be effectuated by certain rules.
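As a rough illustration of how these operators interact, the following Python sketch (purely illustrative; the Prop record, the boundedness flags and the operator functions are assumptions made for the example, not the author's system) encodes three of the constraints discussed above: N and F require a bounded proposition, I over a bounded proposition yields an implicative iteration, and a boundary-giving operator such as Inch both supplies a boundary to a non-bounded proposition and produces a repetition when applied to a momentaneous one.

    # Illustrative sketch only: phi1 = non-bounded durative, phi2 = bounded
    # durative, phi3 = momentaneous; N/F = direct iterative operators,
    # I = extending operator, Inch = one of the boundary-giving (B) operators.
    from dataclasses import dataclass, replace

    @dataclass
    class Prop:
        text: str
        bounded: bool            # has a culmination point / boundary
        durative: bool           # False for momentaneous propositions
        iterative: bool = False  # already denotes repeated occurrences

    def N(prop, times):
        """Direct iteration with a cardinal count adverbial."""
        if not prop.bounded:
            raise ValueError("N needs a bounded proposition (apply a B operator first)")
        return replace(prop, text=f"{prop.text} {times} times", iterative=True)

    def F(prop, freq):
        """Direct iteration with a frequency adverbial."""
        if not prop.bounded:
            raise ValueError("F needs a bounded proposition (apply a B operator first)")
        return replace(prop, text=f"{prop.text} {freq}", iterative=True)

    def I(prop, duration):
        """Extending operator; over a bounded proposition it implies iteration."""
        return replace(prop, text=f"{prop.text} {duration}", iterative=prop.bounded)

    def Inch(prop):
        """Boundary-giving operator; over a momentaneous proposition it also
        produces a repeated, prolonged event."""
        return replace(prop, text=f"began: {prop.text}", bounded=True,
                       iterative=not prop.durative, durative=True)

    phi1 = Prop("he writes", bounded=False, durative=True)
    phi2 = Prop("John walks to the door", bounded=True, durative=True)
    phi3 = Prop("he knocks on the door", bounded=True, durative=False)
    print(N(phi2, 3).text)        # explicit iteration
    print(I(phi2, "for hours"))   # implicative iteration: iterative=True
    print(Inch(phi3))             # inchoative over phi3 brings about repetition
    print(N(Inch(phi1), 3).text)  # N(B phi1): 'began: he writes 3 times'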
null
Main paper: basic condition of the iteration: The iterative aspect is one of the sentential aspects and denotes plural occurrence of an event or an action. The iterative aspect therefore concerns the property of countability. The iterative operations give the iterative aspect to a proposition and are concerned with the plurality of occurrences of the event. As we distinguish count nouns (count terms) from non-count nouns (mass terms), we distinguish countable events from non-countable events, or more precisely, events of which the number of occurrences is countable and those of which the number of occurrences is non-countable. As a count noun has a clear boundary, a countable event also has to have a clear boundary. Countable events are, for instance: he opens a window; he reads a book; he kicks a ball, etc. Non-countable events are, for instance: he swims; he sleeps deeply; he runs fast, etc. Only a countable event can be repeated: he opens three windows; he kicked the ball twice, etc. A non-countable event cannot be repeated: *he sleeps twice. The distinction between the two kinds of events (and the two kinds of propositions), also called the telic-atelic, cyclic-non-cyclic or bounded-non-bounded distinction (these terms are used by Garey, Bull and Allen respectively), is therefore necessary for the execution of the iterative operations. It may be useful to give here some remarks on the terminology. Terms such as 'iterative', 'repetitive', 'frequentative' and 'multiplicative' are very often used as synonyms. However, there are some works which distinguish them from one another. The term repetitive is sometimes used to indicate only one repetition and the term iterative to indicate more than two repetitions. And sometimes the term iterative is used for one repetition and the term frequentative for several repetitions. We use both the terms 'iterative' and 'repetitive' (hence 'iteration' and 'repetition') as synonyms. In this article 'repetition' means, in most cases, two or more occurrences of the same event. But in order to prevent a misunderstanding, we rather use the term 'iteration'. A 'proposition' denotes an event and is a neutral expression in the sense that the tense, aspect and mode operators operate on it. regular and irregular iteration: Two kinds of iterations are distinguished: regular and irregular iterations, i.e. the iterations which correspond to cardinal count adverbials and the iterations which correspond to frequency adverbials. A regular iteration is defined either by a regular frequency of occurrence of the event (called 'fixed frequency' by Stump), or by a constant length of the intervals between occurrences: The busses started at five-minute intervals. The extreme case of the regular iteration is called 'habitude': (2) En été, elle se levait à quatre heures. A regular frequency or a constant interval is indicated by the operator F. An irregular iteration is indicated either with a number of occurrences of an event or with irregular lengths of the intervals between occurrences: (3) Linda called you several times last night. Nous avons entendu le même bruit par intervalles. (Interval) Both the numerical indications and the indications of irregular intervals are given with the operator N. Considering the structure of a repeated event, we can distinguish several forms of repetitions, according to which constituent is affected.
If we say "She changes her dress several times a day", it is the object which is affected by the repetition. Using grammatical category names we can indicate the repeated constituent. At the present stage we have no mechanism detailed enough to differentiate the repeated constituent, nor do we consider the differentiation necessary. We treat all these repetitions as having the type (Subj Pred)* (in a more general form, φ*), and we find no inconvenience in doing so. An event consists of several phases: the beginning, the middle, the end and eventually the result and the imminent phase, i.e. the phase directly preceding the beginning point. As far as repetition is concerned, only a phase including a culmination point is capable of repetition, because the repetition presupposes that the event has a (real or hypothetical) boundary. Like the distinction of the repeated constituent, the distinction of the repeated phase is not especially significant in the iterative operations. Besides, if necessary, we can treat each phase as an independent event: the beginning part φ' of the event φ can be considered as an event. Thus, for the time being, the distinction of phases is also neglected in the iterative operations. Homogeneous iteration and heterogeneous iteration: a homogeneous iteration is an ordinary iteration of the type (φ)*, and a heterogeneous iteration is what is called by Imbs 'la répétition d'alternance'. It is not the iteration of a simple event but the iteration of two or more mutually related events. It has the form (φ' + φ'' ...)*: (7) J'allume et j'éteins une fois par minute. The most frequent case is the combination of two events, but the combination of three events is still possible: (8) Depuis une heure il va à la fenêtre toutes les trois minutes, s'arrête un moment et revient encore. The combination of more than three events is not natural. application order of tense and aspect operators: In the present article we are exclusively concerned with aspect operators; tense operators are not treated, though past-tense sentences are used as examples. We will be content just to say that tense operators come after aspect operators in the operation order. A sentential aspect is the synthesis of the aspectual meanings of all constituents of the sentence. For the efficient execution of the iterative operations, as well as of all aspectual operations, we have to classify beforehand the propositions φi denoting events Si. For this classification we take account of the durative/non-durative and bounded/non-bounded characters of events. This classification is also necessary for other aspectual operations. In order to show the validity of the classification, we give an example of another aspectual operation: the inchoative operation. Inch is a boundary-giving operator and gives the initial border to any proposition, but the meaning of Inch(φi) differs according to φi. With φ1, which does not imply any boundary, Inch functions to give the initial boundary. With φ2, which implies an end point, Inch fixes the initial boundary. ex. φ2 .. Bob builds a sandcastle; Inch(φ2) .. Bob began to build a sandcastle. The length of the event is the time stretch at the end of which Bob is supposed to complete the sandcastle. With φ3 the condition is quite different: φ3, a momentaneous proposition, implies no length (or no meaningful length), and the beginning point and the end point overlap each other.
Inch(φ3) automatically gives the iteration of the event, and the initial boundary becomes the initial boundary of the prolonged event. ex. φ3 .. he knocks (one time) on the door; Inch(φ3) .. He began knocking (repeatedly) on the door. The function of Inch is the same for all three examples, but the meaning of the beginning differs from one to another. The third case (that of φ3) is an example of the fact that a non-repetitious operator can produce certain repetitions. This is the repetitious effect of a non-repetitious operator, to which we will return later. basic operators: An iterative operation is noted Rj(φi), where Rj is either a single operator or several operators. As was already said, a necessary condition of the iteration is that the event in question has a clear boundary. Thus the operators concerned with the iterative operations have either the effect of giving a certain boundary (in the case of a non-bounded event), Bφi, or the effect of repetition. The operators indicated with capital letters are not individual operators, but group names; an individual operator has for instance a form like N2 or F1/w(eek). I is not a proper repetitious operator. However, if the operator I operates on φ2 or on φx, a bounded proposition, it turns the proposition into that of a repeated event. In this case the iterative operation is effectuated indirectly; we call this iteration 'implicative iteration'. ex. φ2 .. John walks to the door; I .. for hours; Iφ2 .. John walked to the door for hours. In order to differentiate this Iφ2 from Iφ1, we use the symbol ~ for an implicative iteration: I(~φ2) (exactly, ~φ is ~φ1 or ~φ2). The symbol ~ appears not only with the operator I, but also with N and F. Term φ3 = Term(~φ3): It stopped beating. As for the strings Nφ1 and Fφ1, they do not satisfy the basic condition of the iteration, i.e. φ1 has no boundary. With some special interpretation rules, however, we can interpret them as Nφ2 and Fφ2 respectively. ex. Fφ1: ?He walks three times a week. → He walks from the house to the station three times every week (Fφ2). complex operators of N, F, I: The above operators N, F, I can be applied successively one after the other, but not every combination nor every application order is acceptable. F·I, I·F, F·N and N·I are acceptable, but N·F is not natural. I·N gives in a certain operational order the same effect as the single operator F, but in other orders other effects. Using complex operators, we get the outputs I(Fφ2), I(Fφ3), F(Nφ2), F(Nφ3), N(Iφ1), F(Iφ1), I(Nφ2), I(Nφ3). Combinations of more than two operators are also possible. Adding B, the boundary-giving operators, and G, the prolonging operators, to the above operators, we can further extend the iterative operations. B is by itself no repetitious operator; its proper function is to give a boundary to a non-bounded proposition. One of the B-operators is Inch: Inch φ1 .. he begins to write. Once an event gains a boundary, it can be repeated. (15) N(Bφ1): He began to write three times. Another application order of N and B gives another kind of output (16). In some cases the operation of B brings about repetitions, as we have seen with the operator Inch; this happens in the combination of B and φ3. A repeated event (which in fact has a durative character like φ1) can again be given a boundary, and this renewed bounded event can again be repeated. This makes a multiple iteration. The iteration can be explicit or implicative.
The following examples given by Freed also have a multiple iterative structure, 'a series of series' according to her terminology. (The direction of an arrow in the figure indicates the written order of two operators in a form; the order of application in the operation is therefore the inverse.) event and background: It is often proposed to distinguish an event from its background (or its occasion). The background is a time stretch in which the event takes place. From a purely theoretical viewpoint, the idea of the double structure event-background is very helpful for the analysis of ambiguous structures. ex. La toupie a tourné trois fois. (This example is borrowed from Rohrer.) In this expression, 'trois fois' can be either the number of occurrences of the event (i.e. the number of spins of the top) or the number of occasions on which the top spun. With the iterative operators the difference can be given clearly: Nφ3 and N(~φ3). In the former case the top spun three times on one occasion, and in the latter case the top spun several times on three occasions. The operators N, F, I are related to both the event and the background. Graphically the difference can be indicated as in figure 2. Operationally, if we differentiated the background from the event at the level of the iterative operations, the rules would have to be too complicated. For the time being the operators N, F, I are used regardless of whether they operate on the event or on the occasion. negation of the iterative propositions: As for the negative cases of iterative operations, there are several possibilities. Either a negated iterative proposition remains iterative or it becomes a non-iterative proposition. In other words, the negation affects the whole proposition in the case of total negation, and affects just the number of repetitions or the frequency in the case of partial negation. In the former case the scope of the negation is larger than that of the iteration, and in the latter case the scope of the negation is smaller than that of the iteration. (23) Nφ3: Il est venu deux fois. ¬(Nφ3), or rather ¬φ3: Il n'est jamais venu. (Total negation) (¬N)φ3: Il n'est pas venu deux fois. (En effet, il n'est venu qu'une fois.) (Partial negation) N(¬φ3): Il n'est pas venu deux fois. Déjà deux fois il n'est pas venu. Fφ3: Il sortait trois fois par semaine. ¬(Fφ3), or rather ¬φ3: Il n'est jamais sorti. (Total negation) (¬F)φ3: Il ne sortait pas trois fois par semaine: en effet il ne sortait que deux fois par semaine. (Partial negation) F(¬φ3): Trois jours par semaine, il ne sortait pas. The result depends on the stage of the operations at which the negation is applied. interpretation and concordance rules: Several kinds of interpretation rules are envisaged. The interpretation rules of the first category are those which give adequate interpretations to Nφ1, Fφ1, etc., in consideration of the context on the pragmatic level. Nφ1 usually receives an interpretation as Nφ2, and Fφ1 as Fφ2. For example, "I walked three times this week" can be interpreted as: "I walked three times from the house to the station this week." The second kind of interpretation rules are concordance rules, which connect diverse expressions with one and the same event. Expressions that differ in appearance or in the means of expression are interconnected by these rules. Eventually, the distinction of the background from the event can also be effectuated by certain rules.
: We present in this article, as part of an aspectual operation system, a generation system of iterative expressions using a set of operators called iterative operators. In order to execute the iterative operations efficiently, we have first classified propositions denoting a single occurrence of a single event into three groups. The definition of a single event is given recursively. The classification has been carried out especially in consideration of the durative / non-durative character of the denoted events and also in consideration of the existence / non-existence of a culmination point (or a boundary) in the events. The operations concerned with iteration have either the effect of giving a boundary to an event (in the case of a non-bounded event) or of extending an event through repetitions. The operators concerned are: N, F .. direct iterative operators; B, G .. boundary giving and prolonging operators; I .. extending operator. There are direct and indirect operations: the direct ones change a non-repetitious proposition into a repetitious one directly, whereas the indirect ones change it indirectly. The indirect iteration is indicated with ~. The scope of each operator is not uniquely definable, though the mutual relation of the operators can be given more or less explicitly. The system of the iterative operations, which makes up a part of the aspectual operation system, is based on the assumption that the general mechanism of repetition is language independent and can be reduced to a small number of operations, though language expressions of repetition differ from language to language. It must be noticed that even in one language there are usually several means to express repetitious events. We know that "il lui cognait la tête contre le mur" and "il lui a cogné deux ou trois fois la tête contre le mur", the examples given by W. Pollak, express the same event. We also have linguistic means for iterative expressions on all linguistic levels: morphological, syntactical, semantic, pragmatic, etc. As the general form of repetition we use Φ = (φi)*, in which Φ is the whole event, φi a single occurrence of a single event and * an iteration indicator. For example: Φ .. (a series of) explosions took place; φ3 .. a single explosion took place; * .. an indefinite number of times. φi actually denotes a proposition describing a single event Si. The * sign will later be replaced by a single or complex operator or operators, which operate(s) on φi. We hope also to be able to give various expressions to the same event and for that purpose we are planning to have a set of interpretation rules. The language mainly concerned is Japanese, but in this article examples are given in French, in English or in German. Appendix:
null
null
null
null
{ "paperhash": [ "rohrer|l’analyse_logique_des_temps_du_passe_en_francais:_comment_on_peut_appliquer_la_distinction_entre_nom_de_matiere_et_nom_comptable_aux_temps_du_verbe" ], "title": [ "L’ANALYSE LOGIQUE DES TEMPS DU PASSE EN FRANCAIS: Comment on peut appliquer la distinction entre nom de matiere et nom comptable aux temps du verbe" ], "abstract": [ "Dans cet expose j'aimerais prouver qu'il y a des rapports tr~s ~troits entre la s~mantique nominale et la s~mantique verbale. J'essaierai d'appliquer la distinction entre nom comptable (angl. count noun) et nom de mati~re (angl. mass noun) au domaine du verbe. En particulier il sera d~montr~ qu'un verbe (ou syntagme verbal) ~ l'imparfait d~note une entit~ du mSme type que celle d~not~e par un nom de mati~re. Un syntagme au pass~ simple ou au passe compose par contre denote une entit~ qui est analogue ~ celle d~not~e par un nom comptable. Exprim~ d'une fa$on moins philosophique: je veux expliquer pourquoi on ne peut pas dire" ], "authors": [ { "name": [ "C. Rohrer" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null ], "s2_corpus_id": [ "41024531" ], "intents": [ [] ], "isInfluential": [ false ] }
- Problem: The paper addresses the classification and operation of iterative expressions within an aspectual operation system, focusing on the efficient execution of iterative operations based on the durative/non-durative and bounded/non-bounded characteristics of events. - Solution: The paper proposes a system of iterative operators, including direct iterative operators, boundary-giving operators, and extending operators, to handle the repetition of events efficiently by either giving a boundary to non-bounded events or extending events through repetitions.
497
0
null
null
null
null
null
null
null
null
737de4597adc92333e9b46c2f0e890dc99a56752
34954301
null
Vocal Interface for a Man-Machine Dialog
We describe a dialogue-handling module used as an interface between a vocal terminal and a task-oriented device (for instance: a robot manipulating blocks). This module has been specially designed to be implemented on a single microprocessor board and inserted into the vocal terminal, which already comprises a speech recognition board and a synthesis board. The entire vocal system is at present capable of conducting a real-time spoken dialogue with its user.
{ "name": [ "Beroule, Dominique" ], "affiliation": [ null ] }
null
null
First Conference of the {E}uropean Chapter of the Association for Computational Linguistics
1983-09-01
16
1
null
A great deal of interest is currently being shown in providing computer interfaces through dialog processing systems using speech input and output (Levinson and Shipley, 1979). At the same time, improvements in microprocessor technology have allowed the implementation of word recognition and text-to-speech synthesis systems on single boards (Liénard and Mariani, 1982; Gauvain, 1983; Asta and Liénard, 1979); in our laboratory, such modules have been integrated into a compact unit that forms an autonomous vocal processor with applications in a number of varied domains: vocal command of cars, of planes, office automation and computer-aided learning (Néel et al., 1982). Whereas most present language understanding systems require large computational resources, our goal has been to implement a dialog-handling board in LIMSI's Vocal Terminal. The use of micro-systems introduces memory size and real-time constraints which have led us to limit ourselves in the use of presently available computational linguistic techniques. Therefore, we have taken inspiration from a simple model of semantic network; for the same reasons, the initial parser based on an Augmented Transition Network (Woods, 1970) and implemented on an IBM 370 (Memmi and Mariani, 1982) was replaced by another, less time- and memory-consuming one. The work presented herein extends possible application fields by allowing an interactive vocal relation between the machine and its user for the execution of a specific task: the application that we have chosen is man-machine communication with a robot manipulating blocks and using a Plan Generating System. Once the acoustic processing of the speech signal is performed by the 250-word recognition board, syntactic analysis is carried out. (Figure: speech recognizer, syntactic analysis, semantic processing and treatment.) It may be noted that response time and word confusions increase with the vocabulary size of word recognition systems.
The knowledge-based data (which may be enlarged by information provided by the vocal channel) is complemented by temporary data which chronologically contain, in abbreviated form, events evoked during the dialogue.The small amount of data representing a given universe allows us to approach the computational treatment of these two complementary and contrary components of dialogue: learning and contestation.Every time an assertion is proposed by the user a procedure parses its semantic validity by answering the question "Does this sentence fit with the current state of the knowledge data ?". If a contradiction is detected, it is pointed out to the user who must justify his proposal. If the user persists in his declaration, the machine may then modify its universe knowledge, otherwise the utterance is not taken into account.When no contradiction is encountered, the program enters into a learning process adding to the temporary data or knowledge-based data. These assertions, characterized by the presence of a non-action verb, permit both the complete construction of the semantic network and of the concept relation rules specifying the possible entities that can serve as arguments for a predicate.Although most of our knowledge results from long nurturing and frequent interactions with the outside world, it is possible to give an approximate meaning to concrete objects and verbs by using an elementary syntax. A new concept may be taught by filling in its position within the semantic network and possibly associating it with properties that will differentiate it from its brother nodes. Concept relation rules can be learned, too.
null
Sentences involving an action verb are translated into an unambiguous representation which condenses and organizes information into the very same form as that of the concept relation rules from the knowledge data. Therefore, semantic validity can easily be tested by a pattern-matching process. A semantic event reduced to a nested-triplet structure and considered as valid is then inserted into the dynamic-events memory, and can be requested later on by the question-answering process. Although the language is limited to a small subset of natural French, several equivalent syntactic structures are allowed to express a given event; in order to avoid storing multiple representations of the same event, paraphrases of a given utterance are reduced to a single standard form. One of the tasks performed by a language understanding system consists of recognizing the concepts that are evoked in the input utterances. As soon as ambiguities are detected, they are resolved through interaction with the user. Relative clauses are not represented in the canonical form of the utterance in which they appear, but are only used to determine which concept is in question. (Figure: a sentence of the form article 1 - Noun 1 - Adjective 1 - Verb - article 2 - Adjective 2 - Noun 2 is abbreviated to ((N1 A1)(N2 A2)), the semantic event E, which is allowable if it matches one of the concept relation rules.) A module of the synthesis process takes any French text and determines the elements necessary for the diphone synthesis, with the help of a dictionary containing pronunciation rules and their exceptions (Prouts, 1979). However, some ambiguities concerning text-to-speech transcription can still remain and cannot be resolved without syntactico-semantic information; for instance: "Les poules du couvent couvent" (the convent hens are sitting on their eggs) is pronounced by the synthesizer /le pul dy kuvɑ̃ kuvɑ̃/ (the convent hens convent). To deal with that problem, we may send the synthesizer the phonetic form of the words. The dialog experiment is presently running on a PDP 11/23 MINC and on an INTEL development system, with a VLISP interpreter, in real time, using a serial interface with the vocal terminal. The isolated word recognition board we are using for the moment makes the user pause for approximately half a second between each word he pronounces. In the near future we plan to replace this module by a connected word system, which will make the dialog more natural. It may be noted that the compactness of the understanding program allows its implementation on a microprocessor board which is to be inserted into the vocal terminal. At present we are working to make the dialog-handling module easily adaptable to various domains of application. (Figure 6: Multibus configuration of the Vocal Terminal.)
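As a rough sketch of the pattern-matching test of semantic validity mentioned above, a concept relation rule can be stored in the same nested shape as the event and matched against it. The triple format, the is_a helper and the toy taxonomy below are assumptions made for the example, not the module's actual data structures.

    # Illustrative check of a semantic event (verb, subject, object) against
    # concept relation rules, using a small is-a taxonomy.
    ISA = {"man": "person", "woman": "person", "book": "document",
           "person": "object", "document": "object"}

    def is_a(concept, ancestor):
        while concept is not None:
            if concept == ancestor:
                return True
            concept = ISA.get(concept)
        return False

    # verb -> list of (allowed subject class, allowed object class)
    RULES = {"read": [("person", "document")]}

    def valid_event(verb, subj, obj):
        """True if some concept relation rule licenses this event triple."""
        return any(is_a(subj, s) and is_a(obj, o) for s, o in RULES.get(verb, []))

    print(valid_event("read", "man", "book"))   # True  -> stored in dynamic-events memory
    print(valid_event("read", "book", "man"))   # False -> a contradiction is signalled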
Input utterances beginning with an action verb specify an order that the machine connected to the vocal interface is supposed to execute; in addition to the deep structure of this natural language message, a formal command language message is built and then sent to the machine. The task universe memory is modified in order to reflect the execution of the user's command. User: Prends la pyramide qui est sur la table et pose-la sur le gros cube (grasp the pyramid which is on the table and put it on the big cube). Machine: S'agit-il du gros cube 3 ? (are you talking about the big cube 3?). User: Oui. In everyday language, intonation often constitutes the marker that discriminates between questions and assertions. Since prosodic information is not presently taken into account by the word recognition system, the presence of an interrogative pronoun switches on the information retrieval processing in the permanent knowledge data or in the dynamic-events memory. U: Qui lit un livre ? (Who is reading a book?) S: Un homme lit un gros livre (A man is reading a thick book). When a certain number of acoustic components in a sentence have not been recognized, the system asks the user to repeat his assertion. Response generation consists of inserting semantic entities into the suitable syntactic diagram, which depends on the computational procedure that is activated (question answering, contradiction, learning, asking for specifications ...). Since each syntactic variation of a word corresponds to a single semantic representation, sentence generation makes use of verb conjugation procedures and concordance procedures. In order to improve the natural quality of the speech, different types of sentences expressing one and the same idea may be generated in a pseudo-random manner. The same question asked to the system several times can thus induce differently formulated responses.
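For the command path described above, the understanding module ends up emitting a formal command message rather than natural language. The exact message syntax of the robot interface is not given in the text, so the bracketed form and the helper below are invented purely to illustrate the idea of translating a resolved order into machine commands.

    # Hypothetical translation of a resolved order into formal command messages.
    def make_commands(parsed):
        # parsed: (action, object, destination-or-None) triples produced after
        # reference resolution ("the big cube" -> CUBE-3, "it" -> PYRAMID-1).
        msgs = []
        for action, obj, dest in parsed:
            msgs.append(f"({action.upper()} {obj}" + (f" {dest})" if dest else ")"))
        return msgs

    parsed_order = [("grasp", "PYRAMID-1", None), ("puton", "PYRAMID-1", "CUBE-3")]
    print(make_commands(parsed_order))
    # ['(GRASP PYRAMID-1)', '(PUTON PYRAMID-1 CUBE-3)']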
null
Main paper: descriptive utterances: Sentences involving an action verb are translated into an unambiguous representation which condenses and organizes information into the very same form as that of the concept relation rules from knowledge data. Therefore, semantic validity can be easily tested by a pattern-matching process. A semantic event reduced to a nested-triplet structure and considered as valid is then inserted in the dynamic-events memory, and can be requested later on by the question-answering process.Although the language is limited to a small subset of natural French, several equivalent syntactic structures are allowed to express a given event ; in order to avoid storing multiple representations of the same event, paraphrases of a given utterance are reduced to a single standard form.One of the task effected by a language understanding system consists of recognizing the concepts that are evoked inside the input utterances. As soon as ambiguities are detected, they are resolved through interaction with the user. Relative~ clauses are not represented in the canonical form of the utterance in which they appear, but they are only used to determine which concept is in question.article i -Nun ! -Adjective I -Verb -article 2 -Adjec. 2 -Nun 2 abbreviated form : @ (( NI A1 )( N2 A2 ))) = semantic event E relation rule n ° i : i p~2) ) ((o~2 p~2) (022 E allowable (~ 3 (i,j) / V k = i, 2 i V .= R 0 i N k E ~ (kj) Pkj E ~-~ (N k) Pkj ~ AkA module of the synthesis process takes any French text and determines the elements necessary for the diphone synthesis, with the help of a dictionnary containing pronunciation rules and their exceptions (Prouts, 1979) . However, some ambiguities concerning text-to-speech transcription can still remain and cannot be resolved without syntactico-semantic information ; for instance : "Les poules du couvent couvent" (the convent hens are sitting on their eggs) is pronounced by the synthesizer : / I £ p u I d y k u v ~ k u v E / (the convent hens ~onvent).To deal with that problem, we may send the synthesizer the phonetic form of the words.The dialog experiment is presently running on a PDP 11/23 MINC and on an INTEL development system with a VLISP interpreter in real-time and using a series interface with the vocal terminal.The isolated word recognition board we are using for the moment makes the user pause for approximately half a second between each word he pronounces. In the near future we plan to replace this module by a connected word system which will make the dialog more natural. It may be noted that the compactness of the understanding program allows its implantation on a microprocessor board which is to be inserted in the vocal terminal.At present we apply ourselves to make the dialog-handling module easily adaptable to various domains of application. D 1 MACHI NE Figure 6 . Multibus configuration of the Vocal Terminal orders: Input utterances beginning with an action verb specify an order that the machine connected to the vocal interface is supposed to execute ; in addition to the deep structure of this natural language message, a formal command language message is built and then sent to the machine. The task universe memory is modified in order to reflect the execution of a user's command.User : Prends la pyramide qui est sur la table et pose. la sur le gros cube (grasp the pyramid which is on the table and put it on the big cube) Machine : S'agit-il du gros cube 3 ?(are you talking of the big cube 3 ?) 
User : Oui In everyday language, intonation often contitutes the marker that discriminates between questions and assertions. Since prosody information is not presently taken into account by the word recognition system, the presence of an interrogative pronoun switches on the information research processing in permanent knowledge-data or in dynamicevents memory. U : Qui lit un livre ? (Who is reading a book ?) S : Un homme lit un gros livre (A man is reading a thick book)When a certain amount of acoustical components in a sentence have not been recognized, the system asks for the user to repeat his assertion. This process consists of inserting semantic entities into the suitable syntactic diagram which depends on the computational procedure that is activated (question answering, contradiction, learning, asking for specifications ...). Since each syntactic variation of a word corresponds to a single semantic representation, sentence generation makes use of verb conjugation procedures and concordance procedures.In order to improve the natural quality of speech, different types of sentences expressing one same idea may be generated in a pseudo-random manner. The same question asked to the system several times can thus induce different formulated responses. i introduction: A great deal of interest is actually being shown in providing computer interfaces through dialog processing systems using speech input and output (Levinson and Shipley, 1979) . In the same time, the amelioration of the microprocessor technology has allowed the implantation of word recognition and text-to-speech synthesis systems on single boards (Li~nard and Mariani, 1982 ; Gauvain, 1983 ; Asta and Li~nard, 1979) ; in our laboratory, such modules have been integrated into a compact unit that forms an autonomous vocal processor which has applications in a number of varied domains : vocal command of cars, of planes, office automation and computer-aided learning (N~el et al., 1982) .Whereas most of the present language understanding systems require large computational resources, our goal has been to implement a dialoghandling board in the LIMSI's Vocal Terminal.The use of micro-systems introduces memory size and real-time constraints which have incited us to limit ourselves in the use of presently available computational linguistic techniques. Therefore, we have taken inspiration from a simple model of semantic network ; for the same reasons, the initial parser based on an Augmented Transition Network (Woods, 1970) and implemented on an IBM 370 (Memmi and Mariani, 1982) was replaced by another less time-and memory-consuming one.The work presented herein extends possible application fields by allowing an interactive vocal relation between the machine and its user for the execution of a specific task : the application that we have chosen is a man-machine communication with a robot manipulating blocks and using a Plan Generating System. Once the acoustic processing of the speech signal is performed by the 250 word-based recognition board, syntactic analysis is carried out.SPEECH I RECOGNI ZER SEMANTI C [ SYNTACTI C PROCESSI NG ANALYSIS SEMANTI C ] TREATMENTIt may be noted that response time and word confusions increase with the vocabulary size of word recognition systems. 
To limit the degradation of performance, syntactic information is used: words that can possibly follow a given word may be predicted at each step of the recognition process with the intention of reducing the vocabulary. In order to build a representation of the deep structure of an input sentence, parameters requested by the semantic procedures must be filled with the correct values. The parsing method that we developed considers the natural language utterances as a set of noun phrases connected with function words (prepositions, verbs ...) which specify their relationships. At the present time, the set of noun phrases is obtained by segmenting the utterance at each function word. The computational semantic memory is inspired by the Collins and Quillian model, a hierarchical network in which each node represents a concept. Properties can be assigned to each node, which also inherits those of its ancestors. Our choice has been influenced by the desire to design a system which would be able to easily learn new concepts; that is, to complete or to modify its knowledge according to information coming from a vocal input/output system. Each noun of the vocabulary is represented by a node in such a tree structure. The meaning of any given verb is provided by rules that indicate the type of objects that can be related. As far as adjectives are concerned, they are arranged in exclusive property groups. The knowledge-based data (which may be enlarged by information provided by the vocal channel) is complemented by temporary data which chronologically contain, in abbreviated form, events evoked during the dialogue. The small amount of data representing a given universe allows us to approach the computational treatment of these two complementary and contrary components of dialogue: learning and contestation. Every time an assertion is proposed by the user, a procedure checks its semantic validity by answering the question "Does this sentence fit with the current state of the knowledge data?". If a contradiction is detected, it is pointed out to the user, who must justify his proposal. If the user persists in his declaration, the machine may then modify its universe knowledge; otherwise the utterance is not taken into account. When no contradiction is encountered, the program enters into a learning process, adding to the temporary data or knowledge-based data. These assertions, characterized by the presence of a non-action verb, permit the complete construction of both the semantic network and the concept relation rules specifying the possible entities that can serve as arguments for a predicate. Although most of our knowledge results from long nurturing and frequent interactions with the outside world, it is possible to give an approximate meaning to concrete objects and verbs by using an elementary syntax. A new concept may be taught by filling in its position within the semantic network and possibly associating it with properties that will differentiate it from its brother nodes. Concept relation rules can be learned, too. Appendix:
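The hierarchical semantic memory and the concept relation rules described above can be illustrated with a very small sketch. The following Python fragment is not the authors' VLISP implementation; the class names, the toy concept hierarchy and the relation rules are assumptions made purely to illustrate property inheritance and the pattern-matching test of semantic validity that precedes learning or contestation.

class Concept:
    """A node of a Collins-and-Quillian-style hierarchy; it inherits the
    properties of all of its ancestors."""
    def __init__(self, name, parent=None, properties=None):
        self.name = name
        self.parent = parent
        self.properties = set(properties or [])

    def all_properties(self):
        props = set(self.properties)
        if self.parent is not None:
            props |= self.parent.all_properties()
        return props

    def is_a(self, other):
        node = self
        while node is not None:
            if node is other:
                return True
            node = node.parent
        return False

# Tiny illustrative hierarchy for the blocks world (invented for this example).
thing   = Concept("thing")
person  = Concept("person", thing, {"animate"})
obj     = Concept("object", thing, {"inanimate"})
block   = Concept("block", obj, {"graspable"})
pyramid = Concept("pyramid", block)

# Concept relation rules: for each verb, the classes of entities that may
# serve as its arguments.
relation_rules = {
    "grasp": [(person, block)],
    "read":  [(person, obj)],
}

def semantically_valid(verb, agent, patient):
    """Pattern-match an event against the concept relation rules."""
    return any(agent.is_a(a) and patient.is_a(p)
               for a, p in relation_rules.get(verb, []))

print(semantically_valid("grasp", person, pyramid))   # True: event accepted
print(semantically_valid("grasp", pyramid, person))   # False: contradiction is signalled

An assertion that fails this test would, as described above, be pointed out to the user, and only confirmed assertions would extend the knowledge data.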
null
null
null
null
{ "paperhash": [ "memmi|arbus,_a_tool_for_developing_application_grammars", "levinson|a_conversational-mode_airline_information_and_reservation_system_using_speech_input_and_output" ], "title": [ "ARBUS, A Tool for Developing Application Grammars", "A conversational-mode airline information and reservation system using speech input and output" ], "abstract": [ "The development of a natural language system usually requires frequent changes to the grammar used. It is then very useful to be able to define and modify the grammar rules easily, without having to tamper with the parsing program. The ARBUS system was designed to help develop grammars for natural language processing. With this system one can build, display, test, modify and file a grammar interactively in a very convenient way. This was achieved by packaging a parser and a grammar editor with an elaborate interface which isolates the user from implementation details and guides him as much as possible.", "We describe a conversational-mode, speech-understanding system which enables its user to make airline reservations and obtain timetable information through a spoken dialog. The system is structured as a three-level hierarchy consisting of an acoustic word recognizer, a syntax analyzer, and a semantic processor. The semantic level controls an audio response system making two-way speech communication possible. The system is highly robust and operates on-line in a few times real time on a laboratory minicomputer. The speech communication channel is a standard telephone set connected to the computer by an ordinary dialed-up line." ], "authors": [ { "name": [ "D. Memmi", "J. Mariani" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "S. Levinson", "K. Shipley" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null ], "s2_corpus_id": [ "5222428", "22318958" ], "intents": [ [ "methodology" ], [] ], "isInfluential": [ false, false ] }
Problem: The paper aims to describe a dialogue-handling module designed to facilitate real-time spoken dialogue between a user and a task-oriented device, such as a robot manipulating blocks, through a vocal terminal. Solution: The hypothesis of the paper is that by implementing a dialog-handling board in the LIMSI's Vocal Terminal, using micro-systems and computational linguistic techniques, it is possible to enable interactive vocal communication for executing specific tasks, such as man-machine communication with a robot manipulating blocks.
497
0.002012
null
null
null
null
null
null
null
null
3e6bd823b0c71c9750df4fd6d637bd98c48965f5
250424
null
How to Parse Gaps in Spoken Utterances
We describe GLP, a chart parser that will be used as a SYNTAX module of the Erlangen Speech Understanding System. GLP realizes an agenda-based multiprocessing scheme which makes it easy to apply various parsing strategies in a transparent way. We discuss which features have been incorporated into the parser in order to process speech data, in particular the ability to perform direction-independent island parsing, the handling of gaps in the utterance, and its hypothesis scoring scheme.
{ "name": [ "Goerz, G. and", "Beckstein, C." ], "affiliation": [ null, null ] }
null
null
First Conference of the {E}uropean Chapter of the Association for Computational Linguistics
1983-09-01
10
3
null
null
null
null
null
GLP (Goerz 1981, 1982a) is a multistrategy chart parser which has special features for the analysis of fragmentary and defective input data, as is the case with speech. GLP, a descendant of a version of GSP by M. Kay (1975), has been implemented in InterLISP. It can be used as a stand-alone system, e.g. to perform experiments, test various parsing strategies, or assist in the development of a linguistic data base. While for this purpose it was given a cooperative, user-friendly interface, we also implemented an interface to the Erlangen Speech System (Niemann 1982). The Speech System's architecture is similar to that of HEARSAY-II, so that it employs a variety of knowledge sources, among which are modules for phonological, syntactic, semantic and pragmatic analysis. Although the structure of GLP does not limit it to syntactic analysis only - it is suitable for morphological analysis or the non-inferential part of semantic analysis as well (see the similar system UCP, Sagvall-Hein (1982)) - its role in the Speech System is constrained to the first-mentioned task. The chart parsing idea was originally conceived and further developed by Martin Kay (1980). Its basic design extends the Well Formed Substring Table, a device used in many parsers to store intermediate results, which is represented as a directed graph, and makes it into an active parsing agent. Initially, the chart is set up as a set of vertices which mark the beginning and end of an utterance and the boundaries between words. The vertices are connected by (inactive) edges which carry the lexical information of the respective words. Whenever a constituent is found during the parsing process, a new inactive edge is added to the chart. In contrast to that, active edges represent incomplete constituents; they indicate an intermediate state in the search for a phrase. Using this data structure, GLP internally simulates a multiprocessing scheme by means of agendas. An agenda is a list of tasks to be carried out over the chart. Tasks are processing steps of different kinds, e.g. genuine analysis processes (Syntax- and Scan-Tasks), input/output with the outside world (Listen- and Talk-Tasks), and supervision to govern the analysis process in the large. In order to achieve a clear modularization, GLP currently employs three agendas: Main for Syntax- and Scan-Tasks, Communication for Listen- and Talk-Tasks, and Control for Supervisor-Tasks. Whenever edges are added to the chart, any new tasks that can be created as a result are scheduled on an agenda. The selection of tasks from an agenda is performed by its selector, which can, in the extreme cases, either perform a depth-first (agenda as a stack) or a breadth-first (agenda as a queue) search strategy. The question of the rule invocation strategy (or parsing strategy) is independent of the choice of the search strategy. Different parsing strategies such as top-down or bottom-up are reflected in different conditions for the introduction of empty active edges. An empty edge represents the task of searching for a constituent; it points to the same vertex it is emerging from, indicating the search direction. Scheduling of tasks on an agenda is performed by its scheduler, which assigns priorities to tasks. GLP's operation in general is controlled by Supervisor-Tasks on the Control agenda, while the other tasks are executed by specific processors (interpreters). The overall control mechanism is embedded in a general interrupt system.
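As a rough illustration of the chart-and-agenda mechanism just described, the sketch below builds inactive and active edges over a chart and lets a selector choose tasks in depth-first (stack) or breadth-first (queue) order. It is written in Python for compactness; GLP itself is implemented in InterLISP, and the Edge/Agenda names, the bottom-up rule invocation and the toy grammar are assumptions made for this example.

from collections import deque

class Edge:
    """A chart edge; an edge with categories still to be found is active."""
    def __init__(self, start, end, cat, needed=()):
        self.start, self.end, self.cat = start, end, cat
        self.needed = tuple(needed)
    @property
    def active(self):
        return bool(self.needed)
    def key(self):
        return (self.start, self.end, self.cat, self.needed)

class Agenda:
    """Pending tasks; the selector fixes the search strategy."""
    def __init__(self, strategy="depth-first"):
        self.tasks, self.strategy = deque(), strategy
    def schedule(self, task):
        self.tasks.append(task)
    def select(self):
        if not self.tasks:
            return None
        return self.tasks.pop() if self.strategy == "depth-first" else self.tasks.popleft()

def parse(word_edges, grammar, strategy="depth-first"):
    chart, seen, agenda = [], set(), Agenda(strategy)
    def add(edge):
        if edge.key() not in seen:
            seen.add(edge.key()); chart.append(edge); agenda.schedule(edge)
    for e in word_edges:
        add(e)
    while (edge := agenda.select()) is not None:
        if edge.active:
            # combine with inactive edges starting where this edge ends
            for p in [x for x in chart if not x.active
                      and x.start == edge.end and x.cat == edge.needed[0]]:
                add(Edge(edge.start, p.end, edge.cat, edge.needed[1:]))
        else:
            # bottom-up rule invocation: start a phrase whose rule begins with this category
            for lhs, rhs in grammar:
                if rhs[0] == edge.cat:
                    add(Edge(edge.start, edge.end, lhs, rhs[1:]))
            # combine with active edges ending where this edge starts
            for a in [x for x in chart if x.active
                      and x.end == edge.start and x.needed[0] == edge.cat]:
                add(Edge(a.start, edge.end, a.cat, a.needed[1:]))
    return [e for e in chart if not e.active]

# Toy usage: the two word edges of "the man" and the rule NP -> Det N.
words = [Edge(0, 1, "Det"), Edge(1, 2, "N")]
print([(e.cat, e.start, e.end) for e in parse(words, [("NP", ("Det", "N"))])])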
Interrupts are caused when the Main agenda - or even a particular task - is done or when the currently available resources are used up, in particular time and number of tasks. Whenever an interrupt occurs, the currently active task is finished and control is passed to the selector of the Control agenda. Then and only then can input/output operations be performed, new resources be assigned, and GLP's strategy be changed (see IV). We do not claim any psycholinguistic validity for this kind of system architecture, although M. Kay (1980) argues that an agenda-based model may lead to significant insights in cognitive psychology. In general, there are two parts of the problem of syntactic and semantic analysis: judgment or decision (whether a given string is grammatical or not) and representation or interpretation (to decide how the pieces of the utterance fit together and what they mean). In a speech understanding system, hypotheses at all levels of abstraction carry quality scores, which play an important role in the overall strategy of the system. GLP receives word hypotheses from the Speech System's blackboard, which have been produced by the word hypothesizer, inserts appropriate word edges into its chart, extracts their quality scores and attaches derived priority scores to the respective edges as features. If gaps in the utterance are recognized (i.e. there are no word hypotheses in a certain time interval with a score larger than a given threshold value), edges are introduced which are marked with the universal category GAP and a score feature which has the threshold as its value. GLP assigns scores to phrases. We are currently developing an explicit focussing strategy which is similar to Woods' (1982) Shortfall Scoring method. This method assigns priorities to partial interpretations, the so-called islands, by comparing the actual score for an island with the maximum attainable score for the time period covered by the island and adding to it the maximum attainable scores for its environment. It can be shown that this priority scheme guarantees the discovery of the best matching interpretation of the utterance. In the special case of a GAP edge, a task is scheduled automatically, looking for matching word hypotheses which have possibly been generated in the meantime. With each attempt to find a matching word hypothesis the GAP edge's score is reduced by a certain percentage until it falls below a second threshold. In this case of failure GLP constructs an incomplete phrase hypothesis out of the available information, including the pattern which characterizes the missing word(s). In addition, while building phrase hypotheses, GLP can also take into consideration preference scores (or weights) for different branches in the grammar, but our grammar does not employ this feature at the present time. Incremental parsing is a salient feature of GLP. There is no distinct setup phase; GLP starts to work as soon as it receives the first (some ten) word hypotheses with a sufficient quality score. Whenever an interrupt occurs, new word hypotheses can be incorporated into the chart. These hypotheses are provided by the Speech System's word hypothesizer, either continuously or as an answer to a request by GLP resulting from gap processing, which has the form of an incomplete word hypothesis that is to be filled. In the latter case active edges act as demons waiting for new information to be embedded in already generated partial structures in such a way that no duplicate analysis has to be performed.
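A compact way to picture the gap handling and the shortfall-style priorities described above is the following sketch. The numeric thresholds, the decay factor and the function names are assumptions for illustration only; the paper describes the mechanism qualitatively and does not give GLP's actual parameters.

GAP_THRESHOLD = 0.40      # below this word-hypothesis score a GAP edge is introduced (assumed value)
GIVE_UP_THRESHOLD = 0.10  # second threshold: build an incomplete phrase hypothesis instead (assumed value)
DECAY = 0.75              # reduction applied to the GAP edge's score after each failed attempt (assumed value)

def process_gap(gap_score, fetch_word_hypotheses, interval):
    """Repeatedly look for word hypotheses that fill the gap; on every failure
    reduce the GAP edge's score until it falls below the second threshold."""
    while gap_score >= GIVE_UP_THRESHOLD:
        candidates = [h for h in fetch_word_hypotheses(interval)
                      if h["score"] > GAP_THRESHOLD]
        if candidates:
            return max(candidates, key=lambda h: h["score"])   # gap filled
        gap_score *= DECAY                                      # failed attempt
    return None   # give up: an incomplete phrase hypothesis is built instead

def island_priority(actual_score, max_attainable_outside):
    """Shortfall-style priority (after Woods 1982): the island's actual score
    plus the maximum attainable score for its environment, i.e. an upper bound
    on the score of any complete interpretation containing the island."""
    return actual_score + max_attainable_outside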
Since the Speech System's overall strategy can decide when new word hypotheses are delivered, a data-driven influence on GLP's local strategy is achieved. The required input/output processes for hypotheses are performed by Listen- and Talk-Tasks, which are activated by the selector attached to the Communication agenda. The Communication selector is triggered by interrupt conditions, which are due to the mentioned overall parsing strategy. The communication channel to the outside world can be parameterized by a general feature, the Wait list. Whenever the name of a processor, e.g. Listen or Talk, is put on the Wait list, this processor is blocked until it is removed from the Wait list. Because blocking of any processor causes a redistribution of the available resources, it consequently affects GLP's strategy. Direct influence on the parsing strategy is achieved by temporarily blocking the Syntax or Scan processors. Furthermore, the strategy can be modified explicitly by attaching a new selector to the Main agenda and by setting various global strategy parameters. These include threshold values, e.g. for gap processing, as well as limits for resources, the most important of which is time. This flexibility in strategy variation is important for an empirical evaluation of our approach. Although we have not yet analyzed GLP's parsing complexity in general, some limiting factors for chart parsing are well known from investigations of the context-free case by Sheil (1976): the number of steps is of O(n^3) and the space requirement of O(n^2), independent of the parsing strategy, where n is the length of the input sentence. The size of the grammar does not influence complexity, but its branching factor, which is a measure of its degree of nondeterminism, acts as a proportionality factor. In the following we would like to point out why we think that GLP's mechanism has several advantages over traditional island parsing schemes (e.g. Woods 1976). In order to process defective input data, the parser must be able to start its operation at any point within the chart. In general, our main parsing direction is from left to right. With respect to the expansion of islands, in particular from right to left, our mechanism is simpler because, for example, there is no explicit representation of paths. For Syntax-Tasks, which proceed in the usual way from left to right, this information is already attached to their corresponding active edges. Scan-Tasks, which seek to the left of the island, access information attached to the vertex they are starting from. Phrase hypotheses are only generated by Syntax-Tasks; if an island cannot be expanded to the right, a Scan-Task which seeks an anchor point for an active edge to the left of the island is scheduled automatically. While in the usual island parsing schemes the focus of attention is not shifted left of an island before appropriate hypotheses are generated (e.g. if there is a gap - of arbitrary duration - left of the island), GLP seeks an anchor point, attaches an active edge to it and schedules a corresponding Syntax-Task. This task will then and only then generate a phrase hypothesis.
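The Wait-list mechanism lends itself to a very small sketch; the class and method names below are assumptions, and the real GLP scheduler of course does considerably more (priorities, resources, interrupts).

class Scheduler:
    """Blocking a processor keeps its tasks from being selected until the
    processor's name is removed from the Wait list again."""
    def __init__(self):
        self.wait_list = set()    # names of blocked processors, e.g. {"Listen"}
        self.agenda = []          # (processor_name, task) pairs

    def block(self, processor):
        self.wait_list.add(processor)

    def release(self, processor):
        self.wait_list.discard(processor)

    def next_task(self):
        for i, (proc, task) in enumerate(self.agenda):
            if proc not in self.wait_list:
                return self.agenda.pop(i)
        return None               # every pending task belongs to a blocked processor

Temporarily blocking the Syntax or Scan processor in this way corresponds to the kind of direct strategy influence described in the text.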
Furthermore, we think that our scheme is combinatorially more efficient, because fewer hypotheses are generated. This fact results from a more adequate representation of an island's left context: in usual island parsing, expansions to the left are performed without regard to the left context of the island as long as only predictions exist and no hypotheses are available. The goal of the parsing strategy we are developing now is that semantic analysis at the constituent level can be started as soon as a local constituent is syntactically recognized (bottom-up). The resulting semantic hypotheses, produced by the SEMANTICS module and delivered through the Speech System's blackboard, which contain semantically based predictions, are then matched against the chart. This process will lead to the generation of new tasks, which in turn may produce new word and phrase hypotheses, so that present islands can be expanded and merged. Thanks to Prof. G. Nees, who continuously encouraged us in our work on GLP, and to Prof. K.M. Colby, Roger Parkison and Dan Christinaz of the Neuropsychiatric Institute, UCLA, where the first author learned a lot about robust parsing during a research stay sponsored by the German Academic Exchange Service (DAAD).
Main paper: i. glp, a general linguistic processor: GLP (Goerz 1981 (Goerz , 1982a ) is a multistrategy chart-parser, which has special features for the analysis of fragmentary and defective input data as it is the case with speech. GLP, a descendant of a version of GSP by M. Kay (1975) , has been implemented in InterLISP.It can be used as a stand-alone system, to e.g. perform experiments, test various parsing strategies, or assist in the development of a linguistic data base. While for this purpose it got a cooperative, user-friendly interface, we also implemented an interface to the Erlangen Speech System (Niemann 1982) . The Speech System's architecture is similar to that of HEARSAY-II, so that it employs a variety of knowledge sources, among which are modules for phonological, syntactic, semantic and pragmatic analysis.Although the structure of GLP does not limit its ability to perform syntactic analysis only -it is suitable for morphological or the non-inferential part of semantic analysis as well (see the similar system UCP, Sagvall-Hein (1982)) -, its role in the Speech System is constrained to the first mentioned task.The chart parsing idea was originally conceived and further developed by Martin Kay (1980) . Its basic design extends the Well Formed Substring Table, a device used in many parsers to store intermediary re-sults, which is represented as a directed graph, and makes it into an active parsing agent.Initially, the chart is set up as a set of vertices which mark beginning and end of an utterance and the boundaries between words. The vertices are connected by (inactive) edges which carry the lexical information of the rasp. words. Whenever a constituent is found during the parsing process, a new inactive edge is added to the chart.In contrast to that, active edges represent incomplete constituents; they indicate an intermediate state in the search for a phrase. Using this data structure, GLP simulates internally a multiprocessing scheme by means of agendas. An agenda is a list of tasks to be carried out over the chart. Tasks are processing steps of different kinds, e.g. genuine analysis ~ rocesses (Syntax-and Scan-Tasks), input output with the outside world (Listen-and Talk-Tasks), and supervision to govern the analysis process in the large. In order to achieve a clear modularization, GLP is currently employing three agendas:Main for Syntax-and Scan-Tasks, Communication for Listen-and Talk-Tasks, and Control for Supervisor-Tasks.Whenever edges are added to the chart, any new tasks that can be created as a result, are scheduled on an agenda. The selection of tasks from an agenda is performed by its selector, which can, in the extreme cases, either perform a depth-first (agenda as a stack) or a breadth-first (agenda as a queue) search strategy.The question of the rule invocation strategy (or parsing strategy) is independent of the choice of the search strategy. Different parsing strategies such as top-down or bottom-up are reflected in different conditions for the introduction of empty active edges. An empty edge represents the task to search a constituent; it points to the same vertex where it is emerging from, indicating the search direction.Scheduling of tasks on an agenda is performed by its scheduler which assigns priorities to tasks. GLP's operation in general is controlled by Supervisor-Tasks on the Control agenda, while the other tasks are executed by specific processors (interpreters).The overall control mechanism is embedded in a general interrupt system. 
Interrupts are caused when the Main agenda -or even a particular task -is done or when the currently available resources are used up, in particular time and number of tasks. Whenever an interrupt occurs, the currently active task is finished and control is passed to the selector of the Control agenda. Then and only then input/output operations can be performed, new resources can be assigned, and GLP's strategy can be changed (see IV).We do not claim any psycholinguistic validity for this kind of system architecture, although M. Kay (1980) argues that an agenda-based model may lead to significant insights in cognitive psychology.In general, there are two parts of the problem of syntactic and semantic analysis: Judgment or decision (whether a given string is grammatical or not) and representation or interpretation (to decide how the pieces of the utterance fit together and what they mean).In a speech understanding system, hypotheses in all levels of abstraction carry quality scores, which play an important role in the overall strategy of the system. GLP receives word hypotheses from the Speech System's blackboard, which have been produced by the word hypothesizer, inserts appropriate word edges into its chart, extracts their quality scores and attaches derived priority scores to the resp. edges as features.If gaps in the utterance are recognized (i.e. there are no word hypotheses in a certain time interval with a score larger than a given threshold value), edges are introduced which are marked with the universal category GAP and a score feature which has the threshold as its value.GLP assigns scores to phrases. We are currently developing an explicit focussing strategy which is similar to Woods' (1982) Shortfall Scoring method. This method assigns priorities to partial interpretations, the so called islands, by comparing the actual score for an island with the maximum attainable score for the time period covered by the island and adding to it the maximum attainable :~cores for its environment. It can be shown that this priority scheme guarantees the discovery of the best matching interpretation of the utterance.In the special case of a GAP edge, a task is scheduled automatically looking for matching word hypotheses which have possibly been generated in the meantime. With each attempt to find a matching word hypothesis the GAP edges' score is reduced by a certain percentage until it falls below a second threshold. In this case of a failure GLP constructs an incomplete phrase hypothesis out of the available information including the pattern which characterizes the missing word(s). In addition, while building phrase hypotheses, GLP can also take into consideration preference scores (or weights) for different branches in the grammar, but our grammar does not employ this feature at the present time.Incremental parsing is a salient feature of GLP. There is no distinct setup phase; GLP starts to work as soon as it receives the first (some ten) word hypotheses with a sufficient quality score. Whenever an interrupt occurs, new word hypotheses can be incorporated into the chart. These hypotheses are provided by the Speech System's word hypothesizer, either continuously or as an answer to a request by GLP, resulting from gap processing, that has the form of an incomplete word hypothesis which is to be filled. In the latter case active edges act as demons waiting for new information to be imbedded in already generated partial structures in such a way that no duplicate analysis has to be performed. 
Since the Speech System's overall strategy can decide when new word hypotheses are delivered, a data-driven influence on GLP's local strategy is achieved.The required input/output processes for hypotheses are performed by Listen-and Talk-Tasks, which are activated by the selector attached to the Communication agenda. The Communication selector is triggered by interrupt conditions, which are due to the mentioned overall parsing strategy. The communication channel to the outside world can be parameterized by a general feature, the Wait list. Whenever the name of a processor, e.g. Listen or Talk, is put on the Wait list, this processor is blocked until it is removed from the Wait list. Because blocking of any processor causes a redistribution of the available resources, it effects in conseq,~ence GLP's strategy. Direct influence on the parsing strategy is achieved by temporarily blocking the Syntax or Scan processors. Furthermore, the strategy can be modified explicitly by attaching a new selector to the Main agenda and by setting Various global strategy parameters. These include threshold values, e.g. for gap processing, as well as limits for resources, the most important of which is time. This flexibility in strategy variation is important for an empirical evaluation of our approach.Although we have not yet analyzed GGP's parsing complexity in general, some limiting factors for chart parsing are well known by investigations on the context free case by Sheil (1976) : The number of steps is o~ O (nD), the space requirements of 0 (n 2) independent of the parsing strategy, where n is the length of the input sentence. The size of the grammar does not influence complexity, but its branching factor, which is a measure for its degree of nondeterminism, acts as a proportionality factor.In the following we like to point out why we think that GLP's mechanism has several advantages over traditional island parsing schemes-(e.g. Woods 1976) . In order to process defective input data, the parser must be able to start its operation at any point within the chart. In general, our main parsing direction is from left to right. With respect to the expansion of islands, in particular from right to left, our mechanism is simpler, because, for example, there is no explicit representation of paths. For Syntax-Tasks, which are proceeding in the usual way from left to right, this information is already attached to their corresponding active edges. Scan-Tasks, which are seeking to the left of the island, access information attached to the vertex they are starting from. Phrase hypotheses are only generated by Syntax-Tasks; if an island cannot be expanded to the right, a Scan-Task which seeks an anchor point for an active edge to the left of the island is scheduled automatically. While in the usual island parsing schemes the focus of attention is not shifted left of an island before appropriate hypotheses are generated, (e.g. if there is a gap -of arbitrary duration -left of the island), GLP seeks for an anchor point, attaches an active edge to it and schedules a corresponding Syntax-Task. This task will then and only then generate a phrase hypothesis. 
Furthermore, we think that our scheme is combinatorially more efficient, because fewer hypotheses are generated.This fact results from a more adequate representation of an island's left context:In usual island parsing expansions to the left are performed without regarding the left context of the island as long as only predictions exist and no hypotheses are available.The goal of the parsing strategy we are developing now is that semantic analysis at the constituent level can be started as soon ~s a local constituent is syntactically recognized (bottom-up).The resulting semantic hypotheses, produced by the SEMANT[CS module and delivered through the Speech System's blackboard, which contain semantically based predictions, are then matched against the chart. This process will lead to the generation of new tasks, which in turn may produce new word and phrase hypotheses, so that present islands can be expanded and merged.Thanks to Prof. G. Nees, who continuously encouraged us in our work on GLP, and to Prof. K.M. Colby, Roger Parkison and Dan Christinaz of the Neuropsychiatric Institute, UCLA, where the first author learnt a lot on robust parsing during a research stay sponsored by the German Academic Exchange Service (DAAD). Appendix:
null
null
null
null
{ "paperhash": [ "hein|an_experimental_parser", "kay|syntactic_processing_and_functional_sentence_perspective", "görz|applying_a_chart_parser_to_speech_understanding", "görz|glp--the_application_of_a_chart-parser_to_speech_understanding:_u._of_erlangen-nuernberg,_frg", "goerx|glp:_a_general_linguistic_processor", "williamson|vii._references" ], "title": [ "An Experimental Parser", "Syntactic Processing and Functional Sentence Perspective", "Applying a Chart Parser to Speech Understanding", "GLP--The application of a chart-parser to speech understanding: U. of Erlangen-Nuernberg, FRG", "GLP: A General Linguistic Processor", "Vii. References" ], "abstract": [ "Uppsala Chart Processor is a linguistic processor for phonological, morphological, and syntactic analysis. It incorporates the basic mechanisms of the General Syntactic Processor. Linguistic rules as well as dictionary articles are presented to the processor in a procedural formalism, the UCP-formalism. The control of the processing rests upon the grammar and the dictionaries. Special attention has been devoted to problems concerning the interplay between different kinds of linguistic processes.", "This paper contains some ideas that nave occurred to me in the course of some preliminary work on the notion of reversible grammar. In order to make it possible to generate and analyze sentences with the same grammar, represented in the same way, I was led to consider restrictions on the expressive power of the formalism that would be acceptible only if the structures of sentences contained more information than American linguisits have usually been prepared to admit. I hope to convey to you some of my surprise and delight in finding that certain linguists of the Prague school argue for the representation of this same information on altogether different grounds.", "A distributor gear assembly includes a housing having a driving shaft and first and second distributing shafts rotatably mounted therein such that the axes of the three shafts are disposed in a common plane. An annular internal gear wheel is rotatably mounted in the housing with the three shafts passing through the annular internal gear wheel. Each of the three shafts has a gear mounted thereon with the gear on the first distributing shaft being axially offset relative to the gear on the second distributing shaft, the gear on the driving shaft meshing with the annular internal gear wheel and with the gear on the first distributing shaft and the gear on the second distributing shaft also meshing with the annular internal gear wheel.", "GLP is a general linguistic processor for the analysis and generation of natural language. It will be integrated into a speech understanding system for continuously spoken German language which is currently under development at the Computer Science Department, University of Erlangen-Nuernberg (Hein (1980) and this issue).", "GLP is a general l inguist ic processor for the analysis and generation of natural language. It is part of a speech understanding system currently under development at the Computer Science Department of our university [ 2 ] . 1. The Structure of GLP GLP is based on a second generation version of the General Syntactic Processor GSP of Kaplan and Kay [3]. I ts architecture is shown in f i g . 1. GLP uses two central data structures, chart and agenda. 
The chart is a directed graph which represents the utterance being analysed (or generated) together with a l l i ts component structures for any point of time of the processor's operation. For the sake of simplici ty our i l lus t ra t ion of the chart's usage is l imited to the simpler case of text processing. In this case the chart is in i t ia l i zed by a sequence of vertices which mark the start and the end of the sentence and the boundaries between words. The vertices are connected by edges which are labelled by the words themselves and lexical information (see f i g . 2.) . During processing GLP introduces more and more edges into the chart representing const i tuents, part ial derivations, etc. Processing is finished when at least one spanning edge from the f i r s t to the last vertex is found which represents a completely specified interpretation of the sentence (see f i g . 3). All edges along one path through the chart belong to the same decomposition of the sentence. Besides these inactive edges during processing there are also active ones which represent only a part of a phrase, together with an indication which kind of information would be necessary to complete i t , i .e . to make an inactive edge out of i t . The agenda is a l i s t of tasks to be carried out over the chart. Each task is the procedural incarnation of a rule of the grammer, or a part of i t . As GLP realizes a multiprocessing scheme a l l tasks can be executed independently of one another as asynchronous parallel processes. The underlying grammar is a procedural one simi l a r to an Augmented Transition Network (ATN). I ts rules contain l ingu is t ica l ly defined operators, among which are operators for the formation of structures, selectors for accessing structures, predicates for testing the appl icabi l i ty of a rule or of parts of i t , operators to cause side effects and to affect the flow of control . The formalism used is similar to that of Kay's[3] reversible grammar system in which the rules are treated as coroutines. The whole processing is controlled by a monitor which is responsible for the i n i t i a l i za t i on of the system and the generation and management of processes. It is the monitor which creates new tasks, spl i ts complex tasks into potential ly parallel executable subtasks, and maintains process and state information. The tasks themselves are executed by an interpreter (or processor) whose instruction set is the set of grammatical operators. Whenever a task sends an interrupt the monitor has to update the chart with the information sent along with it and to look at inactive edges whether there are suspended processes which would need exactly this information to be resumed. The monitor causes the selection of tasks from the agenda by means of a selector in an order determined by strategical reasons which are based on l inguist ic theory. The parsing strategy is realized by a scheduler which gives pr io r i t ies to tasks and so fixes their order of execution. Thus the strategy may be f lex ib le over the whole parsing process; top-down or bottomup processing are not characteristic for the processing as a whole but only for parts of i t . Clearly the structure of GLP does not l im i t i t s ab i l i t y to perform syntactic analysis; with a su i t able l inguis t ic data base (lexicon and grammar) it can be applied from phonological/morphological to semantic processing. 
The desire to break down the tradit ional borders between morphological, syntactic and semantic processing was not the least reason to choose this kind of systems architecture. Part i cularly in systems for processing continuous speech, because of uncertain data, progress in one level of analysis can only be achieved by confirmations from dif ferent levels so that a common data structure for a l l levels so that a common data structure for a l l levels of processing and a unique control structure allowing a f lex ib le strategy become essential 2. Special Features of GLP Besides an improved set of grammatical operators, GLP is able", "A comparative analysis of actuator technologies for robotics \" , At low frequencies, performance is quite good The small downward spike corresponds to the lowest impedance that could be generated on the test rig without large-motion saturation. At resonance, performance at low impedances degrades, while at larger impedances performance is still good. Above resonance, it can clearly be seen that the actuator only performs well when its output impedance has a negative real part, which corresponds to positive spring-like behavior." ], "authors": [ { "name": [ "A. S. Hein" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "M. Kay" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Günther Görz" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Günther Görz" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "G. Goerx" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Matthew M. Williamson", "Peter Dillworth", "Jerry Pratt", "Karsten Ulland", "Anne Wright" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null, null, null, null ], "s2_corpus_id": [ "18312930", "3912970", "46516770", "10143716", "33629552", "11314774" ], "intents": [ [], [ "methodology" ], [ "background" ], [ "background" ], [ "background" ], [] ], "isInfluential": [ false, false, false, false, false, false ] }
- Problem: The paper describes GLP, a chart parser used in the Erlangen Speech Understanding System, with features for processing speech data, including direction independent island parsing, handling gaps in utterances, and a hypothesis scoring scheme. - Solution: The hypothesis of the paper is that GLP, as a multistrategy chart-parser, can effectively analyze fragmentary and defective input data, specifically in the context of speech, by incorporating features such as direction independent island parsing, gap handling, and a hypothesis scoring scheme to improve syntactic and semantic analysis.
497
0.006036
null
null
null
null
null
null
null
null
c7fa39f931792744d0fa6519355e76c1a28a4ca4
8562498
null
An Experiment on Synthesis of {R}ussian Parametric Constructions
The paper describes an experimental model of syntactic structure generation starting from the limited fragment of semantics that deals with the quantitative values of object parameters. To present the input information, basic semantic units of four types are proposed: "object", "parameter", "function" and "constant". For the syntactic structure representation, a system of syntactic components is used that combines the properties of the dependency and constituent systems: syntactic components corresponding to wordforms and exocentric constituents are introduced, and two basic subordinate relations ("actant" and "attributive") are claimed to be necessary. Special attention has been devoted to problems of complex correspondence between the semantic units and lexical-syntactic means. In the process of synthesis, such sections of the model as the lexicon, the syntactic structure generation rules, the set of syntactic restrictions and the morphological operators are utilized to generate a considerable enough subset of Russian parametric constructions.
{ "name": [ "Kononenko, I.S. and", "Pershina, E.L." ], "affiliation": [ null, null ] }
null
null
First Conference of the {E}uropean Chapter of the Association for Computational Linguistics
1983-09-01
5
0
null
The semantics of Russian parametric constructions deals with the quantitative values of object parameters. The parametric information is more or less easily explicated by means of basic semantic units of four types: "object" ('table', 'boy'), "parameter" ('weight', 'length', 'age'), "function" ('more', 'equal', 'almost equal') and "constant" ('two meters', 'from 3 to 5 years'). In simple situations each of these units is separately realized in a lexeme or a phrase, their combinations forming full expressions with the given sense: malchik vesit bolshe dvadcati kilogrammov 'the boy weighs more than twenty kilograms'. It is precisely these direct and simple means of expression that are usually used in systems generating natural language texts. Natural languages, however, operate with more complex means of expression; one-to-one correspondence between semantic units and lexical items is not always the case. The complex situations are suggested here to be explained in terms of decomposition of the input semantic representation (cf. the notion of form-reduction in Bergelson and Kibrik (1980)). This phenomenon is exemplified by such Russian lexemes as stometrovka 'hundred-meters-long distance', which semantically incorporates the four constituents of the parametric semantics. As an ideal, a language model should embrace mechanisms that provide generation and understanding of the constructions that make use of the various possibilities of lexicalization and grammaticalization of sense. The presented model deals with some aspects of the phenomena that have not been considered before: all the possibilities of decomposition of the input information are taken into account and the means of syntactic structure representation are developed to provide the synthesis of the parametric syntactic structure. The paper is organized as follows. In section 2 the set of semantic components is described. In section 3 the relevant syntactic notions are introduced. In section 4 the process of synthesis is outlined, followed by conclusions in section 5. The syntactic structures of Russian parametric constructions are varied enough. The full system of rules (Kononenko and Pershina, 1982) provides the generation of nominal phrases and simple sentences, but structures within the complex sentence such as komnata, dlina kotorojj ravna pjati metram 'room whose length is five meters' are left out of account. So, the model allows for the following examples: shestiletnijj malchik 'six-years-old boy'; bashnja vysotojj bolee sta metrov 'tower of more than hundred meters height'; kniga stoit pjat rublejj 'book costs five roubles' etc. To represent the syntactic structures, the system of syntactic components suggested in Narinyani (1978), which combines the properties of the dependency and constituent systems, proved to be useful. Two different types of syntactic components, the elementary and non-elementary ones, are claimed to be necessary. The elementary component corresponds to a wordform and is traditionally represented by a lexeme symbol marked with syntactic and morphological features. The non-elementary component is composed of syntactically related elementary components. The outer syntactic relations of the non-elementary component cannot be described in terms of syntactic and morphological characteristics of the constituent elementary components.
The notion of a non-elementary component is a convenient tool for describing the syntactic behaviour of Russian quantitative constructions composed of a noun and a numeral: the morphological features of the subject quantitative phrase (nominative, plural) are not equivalent to those of the nominal constituent (genitive, singular). The minimal syntactic structure that is not equal to a wordform is described in terms of a syntagm, i.e. a bipartite pattern in which syntactic components are connected by an actant or attributive syntactic relation. Each component is marked with the relevant syntactic and morphological features. The actant relation holds within the pattern in which the predicate component X governs the form of the actant component Y, e.g. shirina [X] ehkrana [Y] 'width of-screen', where the governing lexeme shirina determines the genitive of the noun actant. The attributive relation connects the component X with its syntactic modifier, or attribute, Y. The attributive syntagm is typically composed of a noun and an adjective (stometrovaja [Y] vysota [X] 'one-hundred-meters height'), a noun and a participle, a noun and another noun, a verb and an adverb or a preposition. The syntactic relation is represented by an "act" or "attr" arrow leading from X to Y. The syntactic class features reflect the combinatorial properties of the components in the constructions under consideration. The following are some examples of the syntactic features: "S_obj" - object nouns (dom 'house'); "S_param" - parametric nouns (ves 'weight'); "A_poss" - possessive adjectives (papin 'father's'); "V_param" - parametric verbs (stoit 'to-cost'); "P_param" - parametric participles (vesjashhijj 'weighing'); "A_meas" - measure adjectives (pjatiletnijj 'five-years-old'). The syntactic structure does not contain any syntactically motivated morphological features connected with government or agreement (the latter are described separately in the morphological operators section of the model). The case of the noun used as attribute is reflected in the syntactic structure representation, since this feature is relevant in distinguishing syntagms. The rules applicable to different fragments of the same decomposition are bound with the syntagmatic restrictions that prevent the unacceptable combinations of syntagms. Thus the combination of the syntagm (c) for {Ko, Kp} and the adjective lexicalization of the "constant" component forms the unacceptable syntactic structure *ehkran pjatimetrovojj shirinojj 'screen of 5-meters-long width (instr)'. The process of synthesis yields all the possible syntactic structures corresponding to the input semantic representation.
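For concreteness, the component-and-syntagm representation described above can be sketched as simple records; the field names and the Python rendering are assumptions made for this example, not the authors' implementation.

from dataclasses import dataclass, field

@dataclass
class Component:
    lexeme: str
    syntactic_class: str                          # e.g. "S_param", "S_obj", "A_meas"
    features: dict = field(default_factory=dict)  # e.g. {"case": "gen"}

@dataclass
class Syntagm:
    head: Component        # X
    dependent: Component   # Y
    relation: str          # "act" (actant) or "attr" (attributive)

# shirina [X] ehkrana [Y] 'width of-screen': the parametric noun governs
# the genitive of its noun actant.
actant_syntagm = Syntagm(
    Component("shirina", "S_param"),
    Component("ehkran", "S_obj", {"case": "gen"}),
    "act",
)

# stometrovaja [Y] vysota [X] 'one-hundred-meters height': attributive syntagm.
attributive_syntagm = Syntagm(
    Component("vysota", "S_param"),
    Component("stometrovaja", "A_meas"),
    "attr",
)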
The information to be communicated is represented as a set of four semantic units, each of them marked with a type symbol (o - "object", p - "parameter", f - "function", c - "constant"). At the initial step of synthesis a process involving the decomposition of the input semantic structure into a system of semantic components takes place. Usually, a semantic structure corresponds to several decompositions. The forming of a component may be motivated by the following reasons. In the event of separate lexicalization a component represents exactly one semantic unit. There are four components of this kind, according to the number of unit types. So, the object component Ko represents a unit of the "object" type and is realized in a noun (dom 'house') or a possessive adjective (papin 'father's'). A component represents more than one semantic unit in two situations. (1) The first one has been mentioned above. It concerns the phenomenon of incorporation of several units in one lexeme: thus, the component Kopfc is introduced to account for lexemes like stometrovka, and the Kpf component is a prototype of parametric-comparative adverbs like shire 'wider'. (2) On the other hand, the introduction of a component may be connected with the fact that a certain unit is not lexicalized at all. Such "reduced" elements of sense are considered to be realized on the surface by the type of the syntactic structure composed of the lexicalized units of the component. For example, in Russian approximative constructions litrov pjat 'about-five-liters' it is only the "constant" unit that is lexicalized, and the unit of the "function" type ('almost equal') is expressed by purely syntactic means, i.e. the inverted word order in the quantitative phrase. The corresponding component represents both the "function" and "constant" units.
null
The first step of synthesis is the decomposition of the input semantic representation into the set of semantic components. The possibilities of lexicalization of components are determined by the lexicon, which provides every lexeme with its semantic prototype - the set of semantic units incorporated in the meaning of the lexeme. The lexicalization rules replace the semantic components by the concrete lexemes, e.g. 'weight' (Kp) is replaced by the lexemes ves [S_param], vesit [V_param] or vesjashhijj [P_param]. The semantic types of components determine their combinatorial properties on the syntactic level. The grammar is developed as a set of rules, each of which provides all the syntagms realizing the initial pair of components. In this report, on the basis of the very limited data of parametric constructions, an attempt has been made to consider a simplified model of synthesis of the text expression beginning from the given semantic representation. The scheme presented above is planned to be implemented within the framework of the question-answering system. Right from the start of synthesis the process of decomposition of the input semantics takes place in order to capture different cases of complex correspondence between the semantic units and the lexical-syntactic means. To generate a considerable enough subset of Russian parametric constructions, such sections of the language model as the lexicon, the grammar generating the syntactic structures, the set of syntactic restrictions and the morphological operators are utilized. The listed constituents, however, do not exhaust all the necessary mechanisms of synthesis, since the problems of word order are left to be investigated and an additional reference to various aspects of the communicative setting is required. We believe that, being of primary importance for automatic synthesis of natural language texts, the communicative aspect of text generation presents one of the most promising research directions for future activity.
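The lexicalization step can likewise be sketched in a few lines; the lexicon entries, the prototype notation and the function name below are assumptions made for this example (the original model is described in prose only).

# Lexicon: each lexeme carries its syntactic class and its semantic prototype,
# i.e. the set of semantic unit types incorporated in its meaning.
lexicon = {
    "ves":         {"class": "S_param", "prototype": {"p"}},                 # 'weight'
    "vesit":       {"class": "V_param", "prototype": {"p"}},                 # 'to weigh'
    "vesjashhijj": {"class": "P_param", "prototype": {"p"}},                 # 'weighing'
    "stometrovka": {"class": "S_obj",   "prototype": {"o", "p", "f", "c"}},  # class assumed
}

def lexicalizations(component_units):
    """Return every lexeme whose semantic prototype matches the set of
    semantic unit types covered by one component of the decomposition."""
    return [lex for lex, entry in lexicon.items()
            if entry["prototype"] == set(component_units)]

print(lexicalizations({"p"}))                  # ['ves', 'vesit', 'vesjashhijj']
print(lexicalizations({"o", "p", "f", "c"}))   # ['stometrovka']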
null
Main paper: se~iantic components: The information to-be-communicated is represented as a set of four semantic units each of them being marked with the type-symbol (o -"object", p -"parameter", f -"function", c -"constant").At the initial step of synthesis a process involving the decomposition of the input semantic structure into a system of semantic components takes place. Usually, a semantic structure corresponds to several decompositions. The forming of a component may be motivated by the following reasons.In the event of separate lexicalization a componen~ represents exac~±y one semantic unit. There are four components of this kind according to the number of unit types. So, the object component K o represents a unit of the "object" type and is realized in a noun (dom 'house') or a possessive adjective (papin 'father's'). A component represents more than one semantic unit in two situations.(1) The first one has been mentioned above. It concerns the phenomenon of incorporation of several units in one lexeme: thus, the component Kopfc is introduced to account for the lexemes like stometrovka and Kpf component is a prototype of parametric-comparative adverbs like shire 'wider'.(2) On the other hand, the introduction of a component may be connected with the fact that a certain unit is not lexicalized at all. Such "reduced" elements of sense are considered to be realized on the surface by the type of the syntactic structure composed of the lexicalized units of the component. For example, in Russian approximative constructions litrov pjat 'about-five-liters' it is only the "constant" unit that is lexicalized and the unit of the "function" type ('almost equal) is expressed by purelysyntactic means, i.e. the inverted word-order in the quantitative phrase. The corresponding component represents both the "function" and "constant" units. syntactic structures: The syntactic structures of Russian parametric constructions are various enough. The full system of rules (Kononenko and Pershina, 1982) provides the generation of nominal phrases and simple sentences but the structures within the complex sentence such as komnata, dlina kotorojj ravna pjati metr~n 'room whoso length is five meters' are left out of account. So, the model allows for the following examples: shestiletnijj malchik 'six-yearsold boy'; bashnja vysotojj bolee sta metrov 'tower of more than hundred meters height'; kniga stoit pjat rublejj 'book costs five roubles' etc.To represent the syntactic structures the system of syntactic components suggested in Narinyani (1978) proved to be useful, that combines the properties of the dependency and constituent systems. ~vo different types of syntactic components, the elementary and non-elementary ones, are claimed to be necessary. The elementary component corresponds to a wordform and is traditionally represented by a lexeme symbol marked with syntactic and morphological features.The non-elementary component is composed of syntactically related elementary components. The outer syntactic relations of the non-elementary component cannot be described in terms of syntactic and morphological characteristics of the constituent elementary components. 
The notion of a non-elementary component is a convenient tool for describing the syntactic behaviour of Russian quantitative constructions composed of a noun and a numeral: the morphological features of the subject quantitative phrase (nominative, plural) are not equivalent to those of the nominal constituent (genitive, singular).The minimal syntactic structure that is not equal to a wordform is described in terms of a syntagm, i.e. a bipartite pattern in which syntactic components are connected by an actant or attributive syntactic relation. Each component is marked with the relevant syntactic and morphological features.The actant relation holds within the attern in which the predicate component governs the form of the actant component Y, e.g.: shirina [XJ ehkrana [Y] 'width of-screen' the governing lexeme shirina determines the genitive of the noun-actant.The attributive relation connects the component X with its syntactic modifier, or attribute, Y. The attributive synta~u is typically composed of a noun and an adjective (stometrovaja [YJ vysota [X] 'onehundred-meters height'), a noun ~id a participle, a noun and another noun, a verb and an adverb or a preposition.The syntactic relation is represented by an'%ct" or "attr" arrow leading from X to Y.The syntactic class features reflect the combinatorial properties of the components in the constructions under consideration. The following are some examples of the syntactic features:"S " -object nouns (dom 'house') obj "S " -parametric nouns param (yes %veight') "A " -possessive adjectives poss (papin 'father's')'|V f'param -parametric verbs (stoit 'to-cost') "P " -parametric participles param (vesjashhijj 'weighing') "A " -measure adjectives meas (pjatiletnijj 'five-yearsold')The syntactic structure does not contain any syntactically motivated morphological features connected with government or agreement (the latter are described separately in the morphological operators section of the model). The case of the noun used as attribute is reflected in the syntactic structure representation since this feature is relevant in distinguishing syntagms. The rules applicable to different fragments of the same decomposition are bound with the syntagmatic restrictions that prevent the unacceptable combinations of syntagms. Thu~ the combination of the syntagm (c) for {K_, K } and the adjective lexicalization of ~he ~onstant" component forms the unacceptable syntactic structure ~ehkran pjatimetrovojj shirinojj 'screen of 5-meters-long width (instr)'.The process of synthesis yields all the possible syntactic structures corresponding to the input semantic representation. structure generation 5 conclusion: The first step of synthesis is the decomposition of the input semantic representation into the set of semantic components. The possibilities of lexicalization of components are determined by the lexicon that provides every lexeme with its semantic prototype -the set of semantic units incorporated in the meaning of the lexeme. The lexicalization rules replace the semantic components b~ the concrete lexemes, e.g.:'weight' ~K~ is replaced P by the lexemes yes IS ~ ~, vesit [V .... ] or vesjashhijj [Pparl]~ ~ The semantic types of components determine their combinatorial properties on the syntactic level. T~le grammar is developed as the set of rules each of which provides all the syntagms realizing the initial pair of components. 
conclusion: In this report, on the basis of very limited data on parametric constructions, an attempt has been made to consider a simplified model of synthesis of a text expression, beginning from a given semantic representation. The scheme presented above is planned to be implemented within the framework of a question-answering system. Right from the start of synthesis, the process of decomposition of the input semantics takes place in order to capture different cases of complex correspondence between the semantic units and the lexical-syntactic means. To generate a considerable subset of Russian parametric constructions, such sections of the language model as the lexicon, the grammar generating the syntactic structures, the set of syntactic restrictions and the morphological operators are utilized. The listed constituents, however, do not exhaust all the necessary mechanisms of synthesis, since the problems of word order remain to be investigated and an additional reference to various aspects of the communicative setting is required. We believe that, being of primary importance for automatic synthesis of natural language texts, the communicative aspect of text generation presents one of the most promising research directions for future activity. introduction: The semantics of Russian parametric constructions deals with the quantitative values of object parameters. The parametric information is more or less easily explicated by means of basic semantic units of four types: "object" ('table', 'boy'), "parameter" ('weight', 'length', 'age'), "function" ('more', 'equal', 'almost equal') and "constant" ('two meters', 'from 3 to 5 years'). In simple situations each of these units is separately realized in a lexeme or a phrase, their combinations forming full expressions with the given sense: malchik vesit bolshe dvadcati kilogrammov 'boy weighs more than twenty kilograms'. It is precisely these direct and simple means of expression that are usually used in systems generating natural language texts. Natural languages, however, operate with more complex means of expression; one-to-one correspondence between semantic units and lexical items is not always the case. The complex situations are suggested here to be explained in terms of decomposition of the input semantic representation (cf. the notion of form-reduction in Bergelson and Kibrik (1980)). This phenomenon is exemplified by such Russian lexemes as stometrovka 'hundred-meters-long distance', which semantically incorporates all four constituents of the parametric semantics. As an ideal, a language model should embrace mechanisms that provide generation and understanding of constructions that make use of the various possibilities of lexicalization and grammaticalization of sense. The presented model deals with some aspects of the phenomena that have not been considered before: all the possibilities of decomposition of the input information are taken into account, and the means of syntactic structure representation are developed to provide the synthesis of the parametric syntactic structure. The paper is organized as follows. In section 2 the set of semantic components is described. In section 3 the relevant syntactic notions are introduced. In section 4 the process of synthesis is outlined, followed by conclusions in section 5. Appendix:
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
497
0
null
null
null
null
null
null
null
null
481e4c51d4d2caca0271f832cd6e826d3ba4bfe3
7127425
null
Fallible Rationalism and Machine Translation
Approaches to MT have been heavily influenced by changing trends in the philosophy of language and mind.
{ "name": [ "Sampson, Geoffrey" ], "affiliation": [ null ] }
null
null
First Conference of the {E}uropean Chapter of the Association for Computational Linguistics
1983-09-01
7
3
null
followed the publication of the ALPAC Report, MT research in the 197Os and early 198Os has had to catch up with major developments that have occurred in linguistic and philosophical thinking; currently, MT seems to be uncritically loyal to a paradigm of thought about language which is rapidly losing most of its adherents in departments of linguistics and philosophy.I argue, both in theoretical terms and by reference to empirical research on a particular translation problem, that the Popperian "fallible rationalist" view of mental processes which is winning acceptance as a more sophisticated alternative to Chomskyan "deterministic rationalism" should lead MT researchers to redefine their goals and to adopt certain currently-neglected techniques in trying to achieve those goals.Since the Second World War, three rival views of the nature of the human mind have competed for the allegiance of philosophically-minded people. Each of these views has implications for our understanding of language.The 195Os and early 1960s were dominated by s behaviourist approach tracing its ancestry to John Locke and represented recently e.g. by Leonard Bloomfield and B.F. Skinner.On this view, "mind" is merely a name for a set of associations that have been established during a person's life between external stimuli and behavioural responses. The meaning of a sentence is to be understood not as the effect it has on an unobservable internal model of reality but as the behaviour it evokes in the hearer.During the 1960s this view lost ground to the rationalist ideas of Noam Chomsky, working in an intellectual tradition founded by Plato and reinaugurated in modern times by Hone Descartes. On this view, stimuli and responses are linked only indirectly, via an immensely complex cognitive mechanism having J ts own fixed principles of operation which are independent of experience.A given behaviour is a response to an internal mental event which is determined as the resultant of the initial state of the mental apparatus together with the entire history of inputs to it.The meaning of a sentence must be explained in terms of the unseen responses it evokes in the cognitive apparatus, which might take the form of successive modifications of an internal model of reality that could be described as "inferencing".Chomskyan rationalism is undoubtedly more satisfactory as an account of human cognition than Skinnerian behaviourism.By the late 197Os, however, the mechanical determinism that is part of Chomsky's view of mind appeared increasingly unrealistic to many writers.There is little empirical support, for instance, for the Chomskyan assumptions that the child's acquisition of his first language, or the adult's comprehension of a given utterance, are processes that reach well-defined terminations after a given period of mental processing --language seems typically to work in a more "open-ended" fashion than that. Within linguistics, as documented e.g. by Moore ~ Carling (1982) , the ChomsMyan paradi~ is hy now widely rejected.The view which is winning widespread acceptance as preserving the merits of rationalism while avoiding its inadequacies is Karl Pepper's falllbilist version of the doctrine.On this account, the mind responds to experiential inputs not by a deterministic algorithm that reaches a halt state, but by creatively formulating fallible conjectures which experience is used to test. 
Typically the conjectures formulated are radically novel, in the sense that they could not be predicted even on the basis of ideally complete knowledge of the person's prior state. This version of rationalism is incompatible with the materialist doctrine that the mind is nothing but an arrangement of matter and wholly governed by the laws of physics; but, historically, materialism has not commonly been regarded as an axiom requiring no argument to support it (although it may be that the ethos of Artificial Intelligence makes practitioners of this discipline more than averagely favourable towards materialism).As a matter of logic, fallible conjectures in any domain can be eliminated by adverse experience but can never be decisively confirmed. Our reaction to linguistic experience, consequently~ is for a Popperian both non-deterministic and open-ended.There is no reason to expect a person at any age to cease to improve his knowledge of his mother-tongue, or to expect different members of a speech-community to formulate identical internalized grammars; and understanding an individual utterance is a process which a person can execute to any desired degree of thoroughness -we stop trying to improve our understanding of a particular sample of language not because we reach a natural stopping-place but because we judge that the returns from further effort are likely to be less than the resources invested.For a Chomskyan linguist, divergences between individuals in their linguistic behaviour are to be explained either in terms of mixture of "dialects" or in terms of failure of practical "performance" fully to match the abstract "competence" possessed by the mature speaker.For the Popperian such divergences require no explanation; we do not possess algorithms which would lead to correct results if they were executed thoroughly.Indeed, since languages have no reality independent of their speakers, the idea that there exists a "correct" solution to the problem of acquiring a language or of understanding an individual sentence ceases to apply except as an untheoretical approximation.The superiority of the Popperian to the Chomskyan paradigm as a framework for interpreting the facts of linguistic behaviour is argued e.g. in my Making Sense 1980, Popperian Linguistics (in press).There is a major difference in style between the MT of the 1950s and 1960s, and the projects of the last decade. This reflects the difference between behaviourist and deterministic-rationalist paradigms.Speaking very broadly, early MT research envisaged the problem of translation as that of establishing equivalences between observable, surface features of languages: vocabulary items, taxemes of order, and the like.Recent MT research has taken it as axiomatic that successful MT must incorporate a large AI component. Human translation, it is now realized, involves the understanding of source texts rather than mere transliteration from one set of linguistic conventions to another:we make heavy use of inferencing in order to resolve textual ambiguities. MT systems must therefore simulate these inferencing processes in order to produce human-like output.Furthermore, the Chomskyan paradigm incorporates axioms about the kinds of operation characteristic of human linguistic processing, and MT research inherits these.In particular, Chomsky and his followers have been hostile to the idea that any interesting linguistic rules or processes might be probabilistic or statistical in rmture (e.g. Chomsky 1957: 15-17, and of. 
the controversy about Labovian "variable rules").The assumption that human language-processing is invariably an all-or-none phenomenon might well be questioned even by someone who subscribed to the other tenets Of the Chomskyan paradigm (e.g. Suppes 1970 ), but it is consistent with the heavily deterministic flavour of that paradigm.Correspondingly, recent MT projects known to me seem to make no use of probabilities, and anecdotal evidence suggests that MT (and other AI) researchers perceive proposals for the exploitation of probabilistic techniques as defeatist ("We ought to be modelling what the mind actually does rather than using purely artificial methods to achieve a rough approximation to its output").What are the implications for MT, and for AI in general, of a shift from a deterministic to a fallibilist version of rationalism?(On the general issue see e.g. the exchange between Aravind Joshi and me in Smith 1982.) They can be summed Up as follows.First, there is no such thing as an ideal speaker's competence which, if simulated mechanically, would constitute perfect MT.In the case of "literary" texts it is generally recognised that different human translators may produce markedly different translations none of which can be considered more "correct" than the others; from the Popperian viewpoint literary texts do not differ qualitatively from other genres.(Referring to the translation requirements of the Secretariat of the Council of the European Communities, P.J. Arthern (1979: 81) has said that "the only quality we can accept is i00~0 fidelity to the meaning of the original".From the fallibilist point of view that is like saying "the only kind of motors we are willing to use are perpetual-motion machines".)Second, there is no possibility of designing an artificial system which simulates the actions of an unpredictably creative mind, since any machine is a material object governed by physical law.Thus it will not, for instance, be possible to design an artificial system which regularly uses inferencing to resolve the meaning of given texts in the same way as a human reader of the texts.There is no principled barrier, of course, to an artificial system which applies logical transformations to derive conclusions from ~iven premisses.But an artificial system must be restricted to some fixed, perhaps very large, database of premisses ("world knowledge").It is central to the Popperian view of mind that human inferencing is not limited to a fixed set of premisses but involves the frequent invention of new hypotheses which are not related in any logical way to the previous contents of mind.An MT system cannot aspire to perfect human performance. (But then, neither can a human.)a situation in which the behaviour of any individual is only approximately similar to that of other individuals and is not in detail predictable even in principle is just the kind of situation in which probabilistic techniques are valuable, irrespective of whether or not the processes occurring within individual humans are themselves intrinsically probabilistic.To draw an analogy: life-insurance companies do not condemn the actuarial profession as a bunch of copouts because they do not attempt to predict the precise date of death of individual policyholders. 
MT research ought to exploit any techniques that offer the possibility of better approximations to acceptable translation, whether or not it seems likely that human translation exploits such techniques; and it is likely that useful methods will often be probabilistic.MT researchers will ultimately need to appreciate that there is no natural end to the process of improving the quality of translation (though it may be premature to raise this issue at a stage when the best mechanical translation is still quite bad). Human translation always involves a (usually tacit) cost-benefit analysis: it is never a question of "How much work is needed to translate this text 'properly'?" but of "Will a given increment of effort be profitable in terms of achieved improvement in translation?" Likewise, the question confronting MT is not "Is MT possible?" but "What are the disbenefits Of translating this or that category of texts at this or that level of inexactness, and how do the costs of reducing the incidence of a given type of error compare with the gains to the consumers?"The value of probabilistic techniques is sufficiently exemplified by the spectacular success of the Lancaster-Oslo-Bergen Tagging System (see e.g. Leech et al. 1983) .The LOB Tagging System, operational since 1981, assigns grammatical tags drawn from a highly-differentiated (134member) tag-set to the words of "real-life" English text.The system "knows" virtually nothing of the syntax of English in terms of the kind of grammar-rules believed by linguists to make up the speaker's competence; it uses only facts about local transition-probabilities between formclasses, together with the relatively meagre clues provided by English morphology.By late 1982 the output of the system fell short of complete success (defined as tagging identical to that done independently by a human linguist) by only 3.4%. Various methods are being used to reduce this failure-rate further, but the nature of the techniques used ensures that the ideal of 100% success will be approached only asymptotically.However, the point is that no other extant automatic tagging-system known to me approaches the current success-level of the LOB system. I predict that any system which eschews probabilistic methods will perform at a significantly lower level.In the remainder of this paper I illustrate the argument that human language-comprehension involves inferencing from unpredictable hypotheses, using research of my own on the problem of "referring" pronouns.My research was done in reaction to an article by Jerry Hobbs (1976) . Hobbs provides an unusually clear example of the Chomskyan paradigm of AI research, since he makes his methodological axioms relatively explicit. He begins by defining a complex and subtle algorithm for referring pronouns which depends exclusively on the grammatical structure of the sentences in which they occur. This algorithm is highly successful:tested on a sample of texts, it is 88.3% accurate (a figure which rises slightly, to 91.7%, when the algorithm is expanded to use the simple kind of semantic information represented by Katz/Fodor "selection restrictions"). 
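As an aside on the kind of probabilistic technique the LOB tagging example above illustrates, here is a minimal sketch of tagging by local transition probabilities between form-classes, decoded with the Viterbi algorithm. It is not a reconstruction of the LOB/Lancaster system; the toy tag set and all transition and emission figures are invented.

```python
import math

# Toy bigram transition and word-emission probabilities (invented numbers).
TRANS = {("DET", "NOUN"): 0.6, ("DET", "ADJ"): 0.3, ("ADJ", "NOUN"): 0.7,
         ("NOUN", "VERB"): 0.4, ("VERB", "DET"): 0.5, ("<s>", "DET"): 0.5,
         ("<s>", "NOUN"): 0.2}
EMIT = {("DET", "the"): 0.7, ("NOUN", "dog"): 0.01, ("NOUN", "bites"): 0.002,
        ("VERB", "bites"): 0.05, ("NOUN", "man"): 0.02, ("ADJ", "old"): 0.1,
        ("NOUN", "old"): 0.001}
TAGS = ["DET", "ADJ", "NOUN", "VERB"]

def viterbi(words):
    """Most probable tag sequence under the bigram model (log-space, smoothed)."""
    def lp(table, key):
        return math.log(table.get(key, 1e-6))   # back-off for unseen events
    best = {t: (lp(TRANS, ("<s>", t)) + lp(EMIT, (t, words[0])), [t]) for t in TAGS}
    for w in words[1:]:
        new = {}
        for t in TAGS:
            score, path = max(
                (best[p][0] + lp(TRANS, (p, t)) + lp(EMIT, (t, w)), best[p][1])
                for p in TAGS)
            new[t] = (score, path + [t])
        best = new
    return max(best.values())[1]

print(viterbi(["the", "old", "dog", "bites", "the", "man"]))
# -> ['DET', 'ADJ', 'NOUN', 'VERB', 'DET', 'NOUN']
```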
Nevertheless, Hobbs argues that this approach to the problem of pronoun resolution must be abandoned in favour of a "semantic algorithm", meaning one which depends on inferencing from a d@ta-base of world knowledge rather than on syntactic structure.He gives several reasons; the important reasons are that the syntactic approach can never attain lOOTo success, and that it does not correspond to the method by which humans resolve pronouns.However, unlike Hobbs's syntactic algorithm, his semantic algorithm is purely programmatic. The implication that it will be able to achieve i00~ success --or even that it will be able to match the success-level of the existing syntactic algorithm --rests purely on faith, though this faith is quite understandable given the axioms of deterministic rationalism. I investigated these issues by examining a set of examples of the pronoun it drawn from the LOB Corpus (a standard million-word computer-readable corpus of modern written British English -see Johansson 1978) .The pronoun it is specially interesting in connexion with MT because of the problems of translation into gender-langu/ages; my examples were extracted from the texts in Category H of the LOB Corpus, which includes governmental and similar documents and thus matches the genres which current large-scale MT projects such as EUROTRA aim to translate.I began with 338 instances of it; after eliminating non-referential cases I was left with 156 instances which I examined intensively.I asked the following questions:(i)In what proportion of cases do I as an educated native speaker feel confident about the intended reference?....(2) Where I do feel confident and Hobbs's syntactic algorithm gives a result which I believe to be wrong, what kind of reasoning enabled me to reach my solution?(3) Where Hobbs's algorithm gives what I believe to be the correct result, is it plausible that a semantic algorithm would give the same result?(4) Could the performance of Hobbs's syntactic algorithm be improved, as an alternative to replacing it by a semantic algorithm? It emerged that:(i)In about I0~ of all cases, human resolution was impossible; on careful consideration of the alternatives I concluded that I did not know the intended reference (even though, on a first relatively cursory reading, most of these cases had not struck me as ambiguous). An example is:The lower platen, which supports the leather, is raised hydraulically to bring it into contact with the rollers on the upper platen ... 
(H6.148) Does it refer to the lower platen or to the leather (la platina, il cuoio:)?I really don't know.In at least one instance (not this one) I reached different confident conclusions about the same case on different occasions (and this suggests that there are likely to be other cases which I have confidently resolved in ways other than the writer intended).The implication is that a system which performs at a level of success much above 90~ on the task of resolving referential it would be outperforming a human, which is contradictory: language means what humans take it to mean.( 2)In a number of cases where I judged the syntactic algorithm to give the wrong result, the premisses on which my own decisions were based were propositions that were not pieces of factual general knowledge and which I was not aware of ever having consciously entertained before producing them in the course of trying to interpret the text in question.It would therefore be quixotic to suggest that these propositions would occur in the data-base available to a future MT system.Consider, for instance:Under the "permissive" powers, however, in the worst cases when the Ministry was right and the M.P. was right the local authority could still dig its heels in and say that whatever the Ministry said it was not going to give a grant.(HI6.I feel sure that i_~t refers to the local authority rather than the Ministry, chiefly because it seems to me much more plausible that a lower-level branch of government would refuse to heed requests for action from a higher-level branch than that it would accuse the higher-level branch of deceit. But this generalization about the sociology of government was new to me when I thought it up for the purpose of interpreting the example quoted (and I am not certain that it is in fact Universally true).(3) In a number of cases it was very difficult to believe that introduction Of semantic considerations into the syntactic algorithm would not worsen its performance.Here, an example is:... and the Isle of Man. We do by these Presents for Us, our Heirs and Successors institute and create a new Medal and We do hereby direct that i__~t shall be governed by the following rules and ordinances ... (H24.16)Hobbs's syntactic algorithm refers it to Medal, I believe rightly.Yet before reading the text I was under the impression that medals, like other small concrete inanimate objects, could not be governed; while territories like the Isle of Man can be, and indeed are.Syntax is more important than semantics in this case.(4) There are several syntactic phenomena (e.g. 
parallelism of structure between successive clauses) which turned out to be relevant to pronoun resolution but which are ignored by Hobbs's algorithm.I have not undertaken the task of modifying the syntactic algorithm in order to exploit these phenomena, but it seems likely that the already-good performance of the algorithm could be further improved.It is also worth pointing out that accepting the legitimacy of probabilistic methods allows one to exploit many crude (and therefore cheaply-exploited) semantic considerations, such as Katz/ Fodor selection restrictions, which have to be left out of a deterministic system because in practice they are sometimes violated.As we have seen, Hobbs suggested that only a small percentage improvement in the performance of his pure syntactic algorithm could be achieved by adding semantic selection restrictions.Rules such as "the verb 'fear' must have an [+animate] subject" almost never prove to be exceptionless in real-life usage: even genres of text that appear soberly literal contain many cases of figurative or extended usage. This is one reason why advocates of a "semantic" approach to artificial language-processing believe in using relatively elaborate methods involving complex inferential chains --though they give us little reason to expect that these techniques too will not in practice be bedevilled by difficulties similar to those that occur with straightforward selection restrictions.However, while it may be that the subject of 'fear' is not always an animate noun, it may also be that this is true with much more than chance frequency.If so, an artificial language-processing system can and should use this as one factor to be balanced against others in resolving ambiguities in sentences containing 'fear'.To sum up: the deterministic-rationalist philosophical paradi~ has encouraged MT researchers to attempt an impossible task. The falliblerationalist paradigm requires them to lower their sights, but may at the same time allow them to attain greater actual success.
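The suggestion that crude selection preferences be treated as one weighted factor among others, rather than as exceptionless rules, can be illustrated with a small sketch. All candidate saliences, preference probabilities and weights below are invented for the Medal example; this is not Hobbs's algorithm, only a way of combining a syntactic-recency score with a soft selection preference.

```python
# P(candidate is a plausible argument of the governing verb), as one might
# estimate it from corpus counts rather than state it as a hard rule.
SELECTION_PREF = {
    ("govern", "medal"): 0.05,
    ("govern", "isle_of_man"): 0.60,
}

def score(candidates, verb, w_syntax=0.7, w_selection=0.3):
    """candidates: (noun, syntactic salience in [0, 1]) in the order a
    Hobbs-style surface search would propose them; returns a ranking."""
    ranked = []
    for noun, salience in candidates:
        pref = SELECTION_PREF.get((verb, noun), 0.1)   # back-off for unseen pairs
        ranked.append((w_syntax * salience + w_selection * pref, noun))
    return sorted(ranked, reverse=True)

# "... create a new Medal and We do hereby direct that it shall be governed ..."
print(score([("medal", 0.9), ("isle_of_man", 0.3)], "govern"))
# syntax still outweighs the (here misleading) selection preference:
# [(0.645, 'medal'), (0.39, 'isle_of_man')]
```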
null
null
null
null
null
null
null
null
{ "paperhash": [ "hobbs|pronoun_resolution", "leech|the_automatic_grammatical_tagging_of_the_lob_corpus" ], "title": [ "Pronoun resolution", "The Automatic Grammatical Tagging of the LOB Corpus" ], "abstract": [ "Two approaches to the problem of pronoun resolution are presented. The first is a naive algorithm that works by traversing the surface parse trees of the sentences of the text in a particular order looking for noun phrases of the correct gender and number. The algorithm is shown to incorporate many, though not all, of the constraints on co-referentiality between a nonreflective pronoun and a possible antecedent, which have been discovered recently by linguists. The algorithm clearly does not work in all cases, but the results of an examination of several hundred examples from published texts show that it performs remarkably well.In the second approach, it is shown how pronoun resolution is handled in a comprehensive system for semantic analysis of English texts. The system consists of four basic semantic operations which work by accessing a data base of 'World knowledge\" inferences, which are drawn selectively and in a context-dependent way in response to the operations. The first two operations seek to satisfy the demands made by predicates on the nature of their arguments and to discover the relations between sentences. The third operation - knitting - recognizes and merges redundant expressions. These three operations frequently result in a pronoun reference being resolved as a by-product. The fourth operation seeks to resolve those pronouns not resolved by the first three. It involves a bidirectional search of the text and 'World knowledge\" for an appropriate chain of inference and utilizes the efficiency of the naive algorithm.Four examples, including the classic examples of Winograd and Charniak, are presented that demonstrate pronoun resolution within the semantic approach.", "In collaboration with the English Department, University of Oslo, and the Nowegian Computing Centre for the Humanities, Bergen we have been engaged in the automatic grammatical tagging of the LOB (LancasterOslo/Bergen) Corpus of British English. The computer programs for this task are running at a success rate of approximately 96.7% and a substantial part of the 1,000,000-word corpus has already been tagged. The purpose of this paper is to give an account of the project, with special reference to the methods of tagging we have adopted." ], "authors": [ { "name": [ "Jerry R. Hobbs" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "G. Leech", "R. Garside", "E. Atwell" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null ], "s2_corpus_id": [ "268074020", "54146896" ], "intents": [ [ "methodology" ], [ "methodology" ] ], "isInfluential": [ false, false ] }
Problem: The paper discusses the influence of changing trends in the philosophy of language and mind on Machine Translation (MT) research, highlighting the need for MT researchers to redefine their goals and adopt neglected techniques in light of the shift from deterministic rationalism to fallibilist rationalism. Solution: The hypothesis posited is that the shift from a deterministic to a fallibilist version of rationalism will have implications for MT and Artificial Intelligence (AI) research, leading to the recognition that perfect human-like translation is unattainable, the necessity of probabilistic techniques for better approximations in translation, and the understanding that the improvement of translation quality is an ongoing, open-ended process.
497
0.006036
null
null
null
null
null
null
null
null
8eed9e2cff46685f88e93b78dd71af87e101eca9
17454434
null
Natural Language Information Retrieval System Dialog
This paper describes an experimental version of the natural language information retrieval system DIALOG. The system is intended for use in the field of medicine. Its main purpose is to give physicians conversational access to information. Using the system does not require any programming ability on the part of its user.
{ "name": [ "Bole, L. and", "Kochut, K. and", "Lesniewski, A. and", "Strzalkowski, T." ], "affiliation": [ null, null, null, null ] }
null
null
First Conference of the {E}uropean Chapter of the Association for Computational Linguistics
1983-09-01
31
1
null
The paper presents the current state of development of the natural language information retrieval system DIALOG. Its aim is automatic, conversational extraction of facts from a given text; in the present case this is a real medical text on gastroenterology, prepared by a team of specialists. The system has a modular structure. The first, and in fact very important, module is the language analysis module. Its task is to ensure the transition of a medical text from its natural form, i.e. sentences formed by physicians, into a formal logical notation. This logical notation, i.e. logical formulae, is rather universal and can easily be adapted to various deductive and knowledge representation methods. The program of the analyser was written with the use of the CATN /Cascaded ATN/ technique, where the syntactic and semantic components constitute separate cascades. In the deduction and knowledge representation module a weak second order language was used, drawing on the works of E. Konrad /Konrad 76/ and N. Klein /Klein 78/. The presented version of the system was implemented on an IBM 370 computer. Transformation of natural language sentences into logical formulae: The user of the DIALOG system, introducing his utterance into the system, comes into direct contact with the natural language analysis module. This module plays the key role in the machine natural language communication process. As in many other information systems of this type, e.g. LUNAR /Woods 72/, PLANES /Waltz 76/, SOPHIE /Burton 76/, RENDEZ-VOUS /Codd 78/, PLIDIS /Berry-Rogghe 78/ and DIALOGIC /Grosz et al. 82/, the purpose of the module is to transform a text in natural language into a chosen formal representation. Such a representation must meet a number of requirements. Firstly, it must be "intelligible" to the internal parts of the system, i.e. the deductive component and/or the data base management component. Secondly, it must carry, in a formal and clear manner, the sense and meaning of utterances in natural language. Finally, the representation should allow for a reproduction of the original input sentence, with the aim of generating intermediate paraphrases and/or answers for the user. In the parser of the DIALOG system we attempted to draw on what are, in our opinion, the greatest achievements in the field of natural language processing. The following works had the greatest influence on the final form of the module: /Berry-Rogghe 78/, /Bates 78/, /Carbonell 81/, /Cercone 80/, /Chomsky 65/, /Ferrari 80/, /Fillmore 68/, /Gershman 79/, /Grosz 82/, /Landsbergen 81/, /Marcus 80/, /Martin 81/, /Moore 81/, /Robinson 82/, /Rosenschein 82/, /Schank 78/, /Steinacker 82/, /Waltz 78/, /Wilensky 80/, /Woods 72/ and /Woods 80/. We have transferred, with greater or lesser success, the most valuable achievements presented in these works, pertaining mainly to English language processing, into our system, using them in the treatment of the Polish language. We attempted in this way to preserve a certain distance with regard to the language itself, as well as to the subject of conversation with the computer, so that the adopted solutions would be of a broader character and thereby comparable with the state of research in this field in other countries. The purpose of the language analysis module in the DIALOG system is the transformation of the user's utterance /in Polish/ into I order logic formulae. Other formal notations, such as II order logic formulae, FUZZY formulae, Minsky frames and even the introduction of intensional logic elements, are also considered.
At present, we will concentrate on the process of transforming a natural language sentence into a I order logic formula. The system is equipped with two independent modules: deduction and data base management. The data for these modules are the formulae generated by the parser. We will present only one module, working on the basis of the weak second order logic. The parsing system consists of two closely cooperating parts: a syntactic analyser and a semantic interpreter. The whole was programmed with the aid of a mechanism called CATN /Cascaded ATN/ /Woods 80/, /Bolc, Strzalkowski 82a, 82b/, /Kochut 83/, where the syntactic component plays the role of the "upper", i.e. dominating, "cascade". The syntactic analyser produces a grammatical analysis structure of the sentence, which in turn undergoes semantic verification. If the semantic interpreter is not able to establish the meaning of the sentence, the syntactic component is activated again with the aim of producing another grammatical analysis. If no such analysis can be found, the input sentence is treated as incorrect.
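A minimal sketch of this control regime follows, under invented helper names and a single toy pattern-concept rule; the real system is an ATN/CATN implementation and its rule base is far richer. The generator stands in for the non-deterministic syntactic cascade, and the interpreter either builds a formula in the IMPLSYM/KONJSYM/#predicate style described further below in the text or rejects the reading, which triggers backtracking to an alternative analysis.

```python
# Illustrative only: loop over alternative grammatical analyses (o-form clauses),
# each handed to a semantic interpreter built from one pattern--concept rule.

EXPERTS = {"alkohol": "SUBSTANCE", "zapalenie trzustki": "SICKNESS"}

RULE = {  # pattern: central predicate and the roles it expects; concept: output predicate
    "pattern": {"predicate": "powodowac", "roles": ("subject", "direct_object")},
    "concept": "#POWODUJE 2",
}

def syntactic_analyses(sentence):
    """Stands in for the ATN parser: yields candidate clause readings, one at a time."""
    yield {"predicate": "powodowac", "subject": "zapalenie trzustki", "direct_object": "alkohol"}
    yield {"predicate": "powodowac", "subject": "alkohol", "direct_object": "zapalenie trzustki"}

def interpret(clause):
    """Apply the pattern--concept rule; return a formula string or None on failure."""
    if clause["predicate"] != RULE["pattern"]["predicate"]:
        return None
    if clause["subject"] != "alkohol":        # toy stand-in for a failed semantic match
        return None
    args, premises = [], []
    for i, role in enumerate(RULE["pattern"]["roles"], start=1):
        sort = EXPERTS[clause[role]]          # the expert qualifies the notion's sort
        args.append(f"({sort} x{i})")
        premises.append(f"(#{clause[role].upper().replace(' ', '-')} 1 ({sort} x{i}))")
    return (f"(IMPLSYM (KONJSYM {len(premises)} {' '.join(premises)}) "
            f"({RULE['concept']} {' '.join(args)}))")

def parse(sentence):
    for o_form in syntactic_analyses(sentence):   # re-activate syntax on semantic failure
        formula = interpret(o_form)
        if formula is not None:
            return formula
    raise ValueError("input sentence treated as incorrect")

print(parse("Alkohol powoduje zapalenie trzustki."))
```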
null
null
The syntactic component of the parser produces a gra~natical analysis of the input sentence in Polish. This was possible due to a skillful programming of rules governing the morphology and syntax of the language. Although, the whole system was oriented towards a defined type of texts /medical/, the accepted solutions make it a much more universal tool. We do not claim that the syntactic analyser in its present fol-m is able to solve all or the majority of problems of the Polish language syntax. It includes, however, rather wide subset of the colloquial language, enriched by constructions characteristic for medical texts.A natural language sentence introduced into the parser undergoes firstly a pretreatment in a so called spelling correcter. If all the words used in the sentence are listed in the system vocabulary then the sentence is passed for syntactic analysis. Otherwise the system attempts to state whether the speaker made a spelling error, giving him a chance to correct the error and even suggesting the proper word, or whether 11e used a word unknown to the system. In the last case, the user has a possibility of introducing the questioned word into the vocabulary but in practice it may turn out to be too troublesome for him. Usually then, the user is given a chance of withdrawing the unfortunate utterance or formulating it in a different way.The proper syntactic analysis begins at the moment of activating the first "cascade" of the parser. It consists of five ATN nets, with the aid of which the grammar of the subset of the Polish language has been written. The two largest nets SENTENCE /sentences/ and N0[~-P_RR /nominal groups/ play a superiorrole in relation to others: ADH-PT~A /adiective groups/~ ADV-PT~A /adverb groups2 and Q-EXPR /question phrases/. The process of syntactic analysis is usually quite complex and uses essentially the non-deterministic character of orocessing in ATN. It Is justified by the-specific nature of the Polish language, which is characgerised by a developed in~ection and a Sentence free word order.The result of the syntactic analysis is a grammatical analysis of the input sentence in the form of a so called o-form. It is a nonflexional form of a sentence, ordered according to a fixed key. The construction of the o-form can be expressed ba the structure: The stick mark "|,, is usually used as a symbol of the meta-language. Here it is used as a symbol of the defined language. Symbols S and END comnrise a single clause. A clause expresses every elementary activity or event expressed in the input sentence. Often, the o-form has a richer structure than a classical analysis tree. The elements of the o-form called ~subject~ , (direct ob-Ject~ , (indirect objectS, and ~adJective phrase) can also be expressed or modified with the use of clauses. The stick marks "I" separate the parts of the o-form and are its constatnt elements. Then transformed nuestion is subjected to semantic interpretation.The syntactic analyser manages the vocabulary, where infle×ional forms of words are kept. The vocabulary definition specifies the syntactic categories, to which given words belong.It also describes forms of words with the aid of lexlcalparameters: case, number, person and gender. These parameters are of gret value in examining the grammatical construction of sentences.When the syntactic analysis is successfully completed the o-form of the input dentence is forwarded for the semantic interpretation. The syntactic "cascade" is suspended, i.e. 
removed from the operational field, leaving place for the semantic "cascade". The configuration of the removed "cascade" is remembered, however, in case it becomes necessary to generate an alternative grammatical analysis. The semantic interpreter consists of two main parts: a constant controlling part, working on the basis of very general pattern matching, and compatible expert algorithms, in which the knowledge of the system in the field of conversation has been coded. The process of interpretation is assisted by a special vocabulary of semantic rules and an additional vocabulary complementing the expert knowledge. The sentence in the o-form is forwarded directly to the controlling part of the interpreter, where such of its parameters as time, negation, aspect, etc. are evaluated first. Then the central predicative element of the sentence "calls for" a proper semantic rule, which from then on will guide the interpretation process. The rule has the form of a pattern-concept pair /Wilensky 80/, /Gershman 79/, /Carbonell 81/, where the pattern reflects the scheme of an elementary event, whereas the concept indicates how its meaning should be expressed through formulae. The semantic rule is activated for the time of interpretation of a single clause. If the pattern is matched against the clause, an atomic formula is generated, expressing the meaning of the clause. The meaning of the whole sentence is expressed as a logical combination of the meanings of all the o-form clauses. The semantic rules bring different /on the surface/ descriptions of the same phenomenon into a common interpretation. The general structure of formulae generated by the interpreter is expressed by an implication ψ1 ∧ ... ∧ ψn → φ, where φ has been introduced from a semantic rule and the ψi come from the system knowledge - special compatible parts of the interpreter called the experts. Individual o-form phrases, in the context of the dialogue subject, are interpreted in the experts. In our system, designed for conversation with a physician, we have experts for names of sicknesses /SICKNESS/, names of organs /ORGAN/, internal substances /SUBSTANCE/, therapies /TREATMENT/, medicaments /MEDICAMENT/, names of animate objects /ANIMATE/ and the remaining objects foreign to the body /PHYSOBJ/. Experts are activated on the request of a proper semantic rule. The controlling part of the interpreter "instructs" the expert/s/ chosen by the pattern to interpret a notion or expression. The indicated expert can solve the problem on its own or seek the help of other experts. Often, one complex expression has to be qualified by two or three experts. All the experts, as well as the controlling part of the interpreter /the FORMULA, CASES and QWORDS nets/, have been recorded in the ATN formalism and form a lower "cascade" of the parser. The interpreter is also equipped with a mechanism for resolving contextual pronominal references. We will present two examples of the transformation of medical sentences into first order logic formulae. Before that, a few words on the adopted convention of formula notation. The symbols IMPLSYM and KONJSYM are logical operators /implication/ and /conjunction/ respectively. An integer placed directly after the symbol KONJSYM indicates the number of conjunction factors. Names of predicates are preceded by the symbol "#" /hash mark/, and an integer placed to the right of the name defines the number of predicate arguments. The arguments specify their type /sort/, the name of the variable and a constant /if there is one/. Sentence: Alkohol powoduje również wzrost napięcia mięśniówki dwunastnicy. /Alcohol also causes an increase in the tension of the duodenal muscle./
(SICKNESS x73)))). The deduction module is a separate part of the whole DIALOG system. Its main purpose is to collect and represent the knowledge gained by the system, and also to provide the ability to use the possessed information in accordance with the wishes of the user of the system. Our work on the achievement of the objectives indicated above was based on the experiences presented by E. Konrad and N. Klein /Konrad 76/, /Klein 78/ from the Technical University in West Berlin. In the previous chapter we presented how the text, written in Polish, is transformed into first order logic formulae. This, of course, implies the way of representation of the knowledge presented in the natural language. The information included in the logical formulae coming from the language module has to be stored for later use. The logical formulae are then introduced into the data base. The data base, adequately filled with the mentioned formulae, constitutes the knowledge representation carried through the natural language sentences. It is as equivalent to the text as the first order logic allows to convey the meaning of the natural language sentences. The data base consists of three separate parts: a nucleus, an amplifier and a filter /Konrad 76/. Each of the parts includes elements that are different from the conceptual point of view: A. The nucleus includes ground literals, which represent facts occurring in the field of knowledge represented in the base. E.g. the information that the pancreas is a secretory organ is presented as a literal (# WYDZ-NARZAD (TRZUSTKA)). From the system point of view there is no conceptual difference between the two facts: the above one, and (ORGAN (TRZUSTKA)). Thus the type /sort/ ORGAN may be regarded as a predicate and the above atomic formula as a true one. B. The amplifier is a part representing the "fundamental" knowledge of the system. The formulae included in the amplifier can be divided into three categories: I/ dependent formulae, of the form /i/: for all x1 in S1 ... for all xn in Sn (A -> p(x1, ..., xn)). A is here any formula and p a predicate. As we can see, each variable bound by the universal quantifier is of a specified sort. Recapitulating, the nucleus represents the extensional part of the knowledge represented in the data base. It is the fundamental knowledge which cannot be obtained from the analysis of the presented text, and which is essential to proper deduction. The amplifier represents the intensional part of the data base. The knowledge represented there is a collection of statements used for deduction. Each of the logical formulae is kept in a certain internal form, corresponding to the way of deduction described later on. As we have already mentioned, the majority of formulae is of the /i/ form. Every such formula is converted, at the moment of insertion into the data base, to a pair of the following form: (<conclusion> <premises testing procedure>). Because of the manner of storing the knowledge described in point 3.1, the answer to a question presented to the system does not have to be represented explicitly in the data base. The deduction module should be able to obtain all the information included in the data base. The questions presented to the system are also converted to logical formulae. Thus, the extraction of knowledge is reduced to the verification of a given formula against the present content of the data base. The logical formula representing the question is converted to an appropriate LISP form. Evaluation of such a form is equivalent to examining whether the formula represented by it is true.
This form corresponds to the normal form of the logical formula /the LISP functions AND, OR and NOT are used/. The literals are tested by a TESTE function according to the following algorithm: 1. Check the amplifier, trying to find a rule with a conclusion unifiable with the literal under proof. If such a formula does not exist then there is no proof of the given literal; 2. If there is such a formula then: a. if it is indicated as an independent formula then STOP with a proof; b. if it is indicated as a restrictive formula then STOP without a proof; c. otherwise evaluate the form associated with the conclusion; if we obtain NIL /false in LISP/ then search the amplifier for another rule and go to 2. If we obtain a value different from NIL then STOP with a proof. Otherwise STOP without a proof. It is therefore a so-called backward deduction system. The proof goes back from the formula - the aim - to the facts, applying the formulae from the amplifier in the "backward" direction. The answer can be YES or NO, or it can be a list of constants, depending on the kind of question. The first order logic has been enriched here with some elements of the second order language. Predicate variables, quantification of these variables and retrieval of predicates as well as constants have been introduced. The system communicates with the data base through commands of a specially designed language. These commands enable introduction into and erasing from the data base. The basic commands serving the purpose of knowledge extraction are TEST and FIND: a. TEST A - looking for the proof of a formula A. Answer YES/NO. b. FIND (P1, ..., Pm; x1, ..., xn) A, where the Pi are predicate variables - retrieval of all the pairs: an m-tuple of predicates and an n-tuple of constants which satisfy a given formula A. The formula presented in example 1 and the formula below have been introduced into the amplifier. Wzrost napięcia mięśniówki dwunastnicy może być przyczyną OZT. /An increase in duodenal muscle tension may be a cause of acute pancreatitis./ Facts - ground literals - were introduced into the nucleus. E.g. (WYDZ-NARZAD (DWUNASTNICA)), etc. After converting the formulae of the theorems and the question into the LISP form, its evaluation will find the answer to the question. The answer is of course YES.
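The data base layout and the backward proof step described above can be illustrated with a small self-contained sketch. It is an assumption-laden toy, not the original LISP implementation: the nucleus is a set of ground literals, each amplifier formula of the /i/ form is kept as a conclusion pattern with its premise literals (standing in for the stored premises testing procedure), and the TESTE-style proof goes backward from the goal to the facts. The independent/restrictive flags and the re-search on failure are omitted, and every predicate name not quoted in the text (WYDZ-CZYNNY, WATROBA) is invented for the demo.

```python
# Toy knowledge base in the spirit of the nucleus/amplifier split described above.

NUCLEUS = {
    ("WYDZ-NARZAD", "TRZUSTKA"),      # the pancreas is a secretory organ
    ("ORGAN", "TRZUSTKA"),            # the sort ORGAN stored like any other fact
    ("WYDZ-NARZAD", "DWUNASTNICA"),
    ("ORGAN", "DWUNASTNICA"),
}

# One amplifier rule of form /i/: for all x of sort ORGAN,
#   WYDZ-NARZAD(x) -> WYDZ-CZYNNY(x)   (conclusion predicate is hypothetical)
AMPLIFIER = [
    (("WYDZ-CZYNNY", "?x"), [("ORGAN", "?x"), ("WYDZ-NARZAD", "?x")]),
]

def prove(goal):
    """Backward proof of a ground literal: from the goal through rules to facts."""
    if goal in NUCLEUS:
        return True
    pred, arg = goal
    for (c_pred, c_arg), premises in AMPLIFIER:
        if c_pred != pred:
            continue                                  # conclusion not unifiable
        if c_arg.startswith("?"):
            binding = {c_arg: arg}
        elif c_arg == arg:
            binding = {}
        else:
            continue
        if all(prove((p, binding.get(a, a))) for p, a in premises):
            return True                               # STOP with a proof
    return False                                      # STOP without a proof

def TEST(goal):
    """The TEST command: YES/NO for a single formula (here, a ground literal)."""
    return "YES" if prove(goal) else "NO"

print(TEST(("WYDZ-CZYNNY", "DWUNASTNICA")))           # -> YES
print(TEST(("WYDZ-CZYNNY", "WATROBA")))               # -> NO (not in the toy base)
```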
The results obtained during the work on the system confirmed our direction of research. Our further work will concentrate on constant improvement of the existing modules. At the same time we will undertake attempts at enriching the system with better deductive modules such as resolution in modal logic, default reasoning /Reiter/, FUZZY and Minsky frames. The medical text was prepared by a team of physicians from the Postgraduate Education Center in Warsaw under the leadership of Prof. Dr J. Doroszewski. Prof. Doroszewski and his associates have been giving us constant assistance in the interpretation of the medical knowledge included in the presented text. Due to their creative and active cooperation we were able to undertake the elaboration of the described system. We would like to express our cordial gratitude to Prof. Doroszewski and the whole team of doctors.
Main paper: the syntactic analyser: The syntactic component of the parser produces a gra~natical analysis of the input sentence in Polish. This was possible due to a skillful programming of rules governing the morphology and syntax of the language. Although, the whole system was oriented towards a defined type of texts /medical/, the accepted solutions make it a much more universal tool. We do not claim that the syntactic analyser in its present fol-m is able to solve all or the majority of problems of the Polish language syntax. It includes, however, rather wide subset of the colloquial language, enriched by constructions characteristic for medical texts.A natural language sentence introduced into the parser undergoes firstly a pretreatment in a so called spelling correcter. If all the words used in the sentence are listed in the system vocabulary then the sentence is passed for syntactic analysis. Otherwise the system attempts to state whether the speaker made a spelling error, giving him a chance to correct the error and even suggesting the proper word, or whether 11e used a word unknown to the system. In the last case, the user has a possibility of introducing the questioned word into the vocabulary but in practice it may turn out to be too troublesome for him. Usually then, the user is given a chance of withdrawing the unfortunate utterance or formulating it in a different way.The proper syntactic analysis begins at the moment of activating the first "cascade" of the parser. It consists of five ATN nets, with the aid of which the grammar of the subset of the Polish language has been written. The two largest nets SENTENCE /sentences/ and N0[~-P_RR /nominal groups/ play a superiorrole in relation to others: ADH-PT~A /adiective groups/~ ADV-PT~A /adverb groups2 and Q-EXPR /question phrases/. The process of syntactic analysis is usually quite complex and uses essentially the non-deterministic character of orocessing in ATN. It Is justified by the-specific nature of the Polish language, which is characgerised by a developed in~ection and a Sentence free word order.The result of the syntactic analysis is a grammatical analysis of the input sentence in the form of a so called o-form. It is a nonflexional form of a sentence, ordered according to a fixed key. The construction of the o-form can be expressed ba the structure: The stick mark "|,, is usually used as a symbol of the meta-language. Here it is used as a symbol of the defined language. Symbols S and END comnrise a single clause. A clause expresses every elementary activity or event expressed in the input sentence. Often, the o-form has a richer structure than a classical analysis tree. The elements of the o-form called ~subject~ , (direct ob-Ject~ , (indirect objectS, and ~adJective phrase) can also be expressed or modified with the use of clauses. The stick marks "I" separate the parts of the o-form and are its constatnt elements. Then transformed nuestion is subjected to semantic interpretation.The syntactic analyser manages the vocabulary, where infle×ional forms of words are kept. The vocabulary definition specifies the syntactic categories, to which given words belong.It also describes forms of words with the aid of lexlcalparameters: case, number, person and gender. These parameters are of gret value in examining the grammatical construction of sentences.When the syntactic analysis is successfully completed the o-form of the input dentence is forwarded for the semantic interpretation. The syntactic "cascade" is suspended, i.e. 
removed from the operational field, leaving place for the semantic "cascade". The configuration of the removed "cascade" is remembered thus, in case of necessity of generating an alternative grammatical analysis.The semantic interpreter consists of the two main parts: a constant controlling part, working on the basis of a very general pattern adjustment, and compatible experts algorithms, where the knowledge of the system in the field of conversation has been coded. The process of interpretation is assisted by a special vocabulary of semantic rules and on additional vocabulary complementing the expert knowledge.The sentence in the o-form is forwarded directly to the controlling part of the interpreter, where such its parameters as time, negation, aspect .... are evaluated first. Then the central predicative element of the sentence "calls for" a proper semantic rule, which from then will guide the interpretation process. The rule has a form of ~ pattern-concept pair /Wilensky 80/ Gershman 79/, /Carbonell 81/, where ~he pattern reflects the scheme of an elementary event, wheras the concept indicates how its meaning should be expressed through formulae. The semantic rule is activated for the time of interpretation of a single clause. If the pat tern is adjusted to the cl~use, an atomic formula is generated, expressing the meaning of the clause. The meaning of the whole sentence is expressed as a logical combination of meanings of all the o-form clauses. The semantic rules bring different /on the surface/ descriptions of the same phenomenon into a common interpretation.The.general structure of formulae generated by the interpreter is expressed by an implication:"where ~ has been introduced from a semantic rule and~i come from the system knowledge -special compatible parts of the interpreter called the experts. Individual o-form phrases, in the context of the dialogue subject, are interpreted in experts.In our system, designed for conversation with a phlsician, we have experts for names of sicknesses /SICKNESS/, names of ~rgaus /ORGAN/, internal substances /oUBSTANCE/, therapies /TREAT-~NT/, medicaments /MEDICAmeNT/ and names of animate objects /ANIMATE/ and the remaining objects foreign to the body /PHYSOBJ/.Experts are activated on the request of a proper semantic rule. The controlling part of the inter~eter "instructs" the expert/s/ chosen by the pattern to interpret a notion or expression. The indicated expert can solve the problem on its o~m or seek for the help of other experts. Often, one complex expression has to be gualified by two or three exprrts.All the experts, as well as the controlling part of the interpreter /FOR~UJLA, CASES and QWORDS nets/ have been recoreded in ATN formalism and form a lower "cascade" of the parser.The interpreter is also egulpped with a mechanism of context pronominal reference solution.We will present two examples of transformation of medical sentences into I order logic formulae. Before that, a few words on the adopted convention of formula notation. The symbols IMPLSYM and KONJSYM are logical operators /implication/ andS/conjunction/ respectively. Integer placed directly after the symbol KONJSYN indicates the number of conjlmction factors. Names of predicates are preceded by symbols '~" 7hash mark/, and an integer placed right to the name defines the number of predi-cate arguments. The arguments specify their type /sort/, name of the variable and constant /if there is one/.Sentence :Alkehol powoduje r6wnie~ wzrost napi~cia mi~ni6wki dwunastnlcy. 
(STOKNESS x73)))).The deduction module is a separate part of the whole DIALOG system. Its maiz purpose is to collect and represent the knowledge gained by the system and also the ability to use the possessed information in accordance with the wishes of the user of the system.Our work on the achievement of the objectives indicated above was based on the experiences pre~ented by E.Konrad and N.Klein /Konrad 76/, /Klein 78/ from Technical University in West Berlin.In the previous chapter we presented how the text, written in Polish, is transformed into I order logic formulae. This, of course, implies the way of representation of the knowledge presented in the natural language. knowledge representation: The information included in the logical formulae coming from the language module has to be stored for later use. The logical formulae are then introduced into the data base. The data base, adequately filled with the mentioned formulae, constitutes the knowledge represenlation carried through the natural language sentences. It is as equivalent to the text as the I order logic allows to convey the meaning of th~ natural language sentences.The date base consists of three separate parts: a nucleus, ~ amplifier and a filter /Konrad 76/. Each of the parts includes a different , from the concep-tional point of view, elements: A. The nucleus includes groud literals, which represent facts occuring in the field of knowledge represented in the base. E.g.the information that the pancreas is a secretory organ is presented as a literal (~ WYDZ-NARZAD (TRZUSTtfA)~From the system point of view there is no conceptional difference between the tee facts: the above one,and (ORGAN ([nRZUSTKA)) Thus the type /sort/ ORGAN may be regarded as a predicate and the above atomic formula as true one. B. The amplifier is a part representing the "fundamental" knowledge of the system. The formulae included in the amplifier can be devided into three categories: I/ dependent formulae /i/Vx~ ~s~..VXnCS~ A~x~,..A is here any formula and n a predicate. As we can see each variable, bound by the universal ~uantifier is of a specified sort. Recapitulating, the nucleus represents the extensional part of the knowledge represented in the data base. It is the fundamental knowledge which cannot be obtained from the amalysis of the presented text, and which is assential to proper deduction. The amplifier represents the intensional part of the data base. The knowledge represented there is a co31ection of statements used for deduction.Each of the logical formulae is kept in a certain internal form, corresponding to the way of deduction, described later on. As we have already mentioned, the majority of formulae is of the /i/ form. Every such formula is converted, at the moment of inserting into the data base, to a pair of the following form:(~conclusion~premises testing procedure)Because of the menner of storing the knowledge described in the point 3.1, the answer to the question presented to the system does not have to be represented explicite in the data base. The deduction module should be able to obtain all the information included in the data base.The questions presented to the system are also converted to the logical formulae. Thus, the extraction of knowledge is reduced to the verification of a given formula towards the present content of the data base.The logical formula representing the question is converted to an appropriate LISP form. 
Evaluation of such a form is equivalent to examination whether the represented by it formula is true. This form correspond to the normal form of the logical formula /LISP function AND, OR and NOT are used/. The literals are tested by a TESTE function according to the following algorithm: I. Check the amplifier, trying to find the rule with the conclusion unifiable with the literal under proof. If such a formula does not exist that there is no proof of a given literal; 2. If there is such a formula then: a. if it is indicated as an independent formula then STOP with a proof b. if it is indicated as a restrictive formula then STOP without a proof~ c. otherwise evaluate the form associated with the conclusion; if we obtain NIT, /false in LISP/ then search the amplifier for another rule and go to 2. If we obtain value different than NIL then STOP with a proof. Otherwise Stop without a nroof.It is therefore a so called backward deduction zystem. The nroof goes back from the formula -aim ~ to the facts, applying the formulae from the amplifier in the "Backward" direction.The answer can be YES or NO or it can be a list of constants depending on the kind of question.The I order logic has been enriched here with some elements of the II order language. Predicate variables, quantification of these variavles and retrieval of predicates as well as constants have been introduced.The system communicates with the data base through commands of the specially designed language. These commands enable introduction and erasing from the data base.The basic commands serving the purpose of knowledge extraction are TEST and FIND: a. TEST A -looking for the proof of a formula A. Answer YES/NO. b. FIND ~1""11'mX~xl"'xn) ~r~1"';x1" '~ ~i -predicate variables -retrieval of all the pairs: m-tuple predicates and n-tuple oe constants which satisfy a given formula A.The formula presented in the example I and a formula below have been introduced into the amlifier.Wzrost napi@cia mi~dni6wki d~mnastnicy mo~e by4 przyczyn~ OZT. Facts -ground literals -were introduced into the nucleus. E.g.(WV~DZ-NARZAD (DWUNASTNICA)), etc.After converting the formulae of theorem~ and question into the LISP form its evaluation Will find the answer to the question. The answer is of course YES. conclusion: The results obtained during the work on the system confirmed our direction of research. Our further work will concentrate on constant improvement of the existing modules. At the sere time we will undertake attempts of enriching the system with better deductive modules such as resolution in modal logic, default reasoning /Relter/, FUZZY and Minsky frames. introduction: The paper presents the state of elaboration of the natural language information retrieval system DIALOG. Its aim is an automatic, conversational extraction of facts from a given text. Actually it is real medical text on gastroenterology, which was prepared by a team of specialists. The system has a modular structure.The first, and in fact very important module is the language analysis module. Its task is to ensure the transition of a medical text from its natural form, i.e. rentences formed by physicians, into a formal ~ogical notation. This logical notation, i.e. logical formulae, is rather universal and can be easy adapted to various deductive and knowledge representation methods. 
The program of the analyser was written with the use of the CATN /Cascaded ATN/ technique, where the syntactic and semantic components constitute separate cascades. In the deduction and knowledge representation module the weak second order language was used, building on the works by E. Konrad /Konrad 76/ and N. Klein /Klein 78/. The presented version of the system was implemented on the IBM 370 computer. Transformation of natural language sentences into logical formulae. The user of the DIALOG system, introducing his utterance into the system, comes into direct contact with the natural language analysis module. This module plays the key role in the machine natural language communication process. Similarly as in many other information systems of this type, e.g. LUNAR /Woods 72/, PLANES /Waltz 76/, SOPHIE /Burton 76/, RENDEZ-VOUS /Codd 78/, PLIDIS /Berry-Rogghe 78/, DIALOGIC /Grosz et al. 82/, the purpose of the module is to transform a text in the natural language into a chosen formal representation. Such a representation must meet a number of requirements. Firstly, it must be "intelligible" to the internal parts of the system, i.e. the deductive component and/or the data base management. Secondly, it must carry in a formal and clear manner the sense and meaning of utterances in natural language. Finally, the representation should allow for a reproduction of the original input sentence with the aim of generating intermediate paraphrases and/or answers for the user. In the parser of the DIALOG system, we attempted to draw on the greatest, in our opinion, achievements in the field of natural language processing. The following works had the greatest influence on the final form of the module: /Berry-Rogghe 78/, /Bates 78/, /Carbonell 81/, /Cercone 80/, /Chomsky 65/, /Ferrari 80/, /Fillmore 68/, /Gershman 79/, /Grosz 82/, /Landsbergen 81/, /Marcus 80/, /Martin 81/, /Moore 81/, /Robinson 82/, /Rosenschein 82/, /Schank 78/, /Steinacker 82/, /Waltz 78/, /Wilensky 80/, /Woods 72/ and /Woods 80/. We have transferred, with greater or lesser success, the most valuable achievements presented in these works, pertaining mainly to English language processing, into our system, using them in the treatment of the Polish language. We attempted thus to preserve a certain distance with regard to the language itself, as well as to the subject of conversation with the computer, so that the adopted solutions were of a broader character and thereby became comparable with the state of research in that field in other countries. The purpose of the language analysis module in the DIALOG system is the transformation of the user's utterance /in Polish/ into first order logic formulae. Other formal notations such as second order logic formulae, FUZZY formulae, Minsky frames and even the introduction of intensional logic elements are also considered. At present, we will concentrate on the process of transforming a natural sentence into a first order logic formula. The system is equipped with two independent modules: deduction and data base management. The data for these modules are the formulae generated by the parser. We will present only one module, working on the basis of the weak second order logic. The parsing system consists of two closely cooperating parts: a syntactic analyser and a semantic interpreter. The whole was programmed with the aid of a mechanism called CATN /Cascaded ATN/ /Woods 80/, /Bolc, Strzalkowski 82a,82b/, /Kochut 83/, where the syntactic component plays the role of the "upper", i.e. the dominating, "cascade".
For the syntactic analyser produces a structure of the sentence grammatical analysis, which in turn undergoes a semantical verification. In case, where the semantic interpreter is not able to give the meaning of the sentence, the syntactic component is activated again with the aim of presenting another grammatical analysis. If such an analysis cannot be found, the input sentence is treated as incorrect. Appendix: The medical text was prepared by a team of physicians from the Postgraduate Education Center in Warsaw under the leadership of Prof. Dr J.Doroszewski. Prof. Doroszewskl and his associates have been giving us constant assistance in the interpretation of the medical knowledge included in the presented text. Due to their creative and active cooperation we were able to undertake the elaboration of the described system. We would like to express our cordial gratitude to Prof. Doroszewski and the whole team of doctors.
null
null
null
null
{ "paperhash": [ "bolc|design_of_interpreters,_compilers,_and_editors,_for_augmented_transition_networks", "grosz|dialogic:_a_core_natural-language_processing_system", "bolc|transformation_of_natural_language_into_logical_formulas", "rosenschein|translating_english_into_logical_form", "dahl|translating_spanish_into_logic_through_logic", "moore|problems_in_logical_form", "marcus|a_theory_of_syntactic_recognition_for_natural_language", "waltz|the_planes_system:_natural_language_access_to_a_large_data_base.", "petrick|on_natural_language_based_computer_systems", "woods|cascaded_atn_grammars", "gershman|knowledge-based_parsing." ], "title": [ "Design of Interpreters, Compilers, and Editors, for Augmented Transition Networks", "DIALOGIC: A Core Natural-Language Processing System", "Transformation of Natural Language Into Logical Formulas", "Translating English Into Logical Form", "Translating Spanish Into Logic Through Logic", "Problems in Logical Form", "A theory of syntactic recognition for natural language", "The PLANES System: Natural Language Access To a Large Data Base.", "On Natural Language Based Computer Systems", "Cascaded ATN Grammars", "Knowledge-based parsing." ], "abstract": [ "The Planes Interpreter and Compiler for Augmented Transition Network Grammars.- An ATN Programming Environment.- Compiling Augmented Transition Networks into MacLisp.- Towards the Elastic ATN Implementation.", "The DIALOGIC system translates English sentences into representations of their literal meaning in the context of an utterance. These representations, or \"logical forms,\" are intended to be a purely formal language that is as close as possible to the structure of natural language, while providing the semantic compositionality necessary for meaning-dependent computational processing. The design of DIALOGIC (and of its constituent modules) was influenced by the goal of using it as the core language-processing component in a variety of systems, some of which are transportable to new domains of application.", "This paper presents an attempt of elaboration of a full parsing system for Polish natural language which is being worked out in the Institute of Informatics of Warsaw University. Our system was adapted to the parsing of the corpus of real medical texts which concern a subdomain of medicine. We made use of the experience of such famous authors as (6), (7), (8), (9), (10), (11), (12), (13), (14).", "A scheme for syntax-directed translation that mirrors compositional model-theoretic semantics is discussed. The scheme is the basis for an English translation system called PATR and was used to specify a semantically interesting fragment of English, including such constructs as tense, aspect, modals, and various lexically controlled verb complement structures. PATR was embedded in a question-answering system that replied appropriately to questions requiring the computation of logical entailments.", "We discuss the use of logic for natural language (NL) processing, both as an internal query language and as a programming tool. Some extensions of standard predicate calculus are motivated by the first of these roles. A logical system including these extensions is informally described. It incorporates semantic as well as syntactic NL features, and its semantics in a given interpretation (or data base) determines the answer-extraction process. We also present a logic-programmed analyser that translates Spanish into this system. 
It equates semantic agreement with syntactic weil-formedness, and can detect certain presuppositions, resolve certain ambiguities and reflect relations among sets.", "Abstract : Most current theories of natural-language processing propose that the assimilation of an utterance involves producing an expression or structure that in some sense represents the literal meaning of the utterance. It is often maintained that understanding what an utterance literally means consists in being able to recover such a representation. In philosophy and linguistics this sort of representation is usually said to display the \"logical form\" of an utterance. This paper surveys some of the key problems that arise in defining a system of representation for the logical forms of English sentences and suggests possible approaches to their solution. The author first looks at some general issues relating to the notion of logical form, explaining why it makes sense to define such a notion only for sentences in context, not in isolation, and then discusses the relationship between research on logical form and work on knowledge representation in artificial intelligence. The rest of the paper is devoted to examining specific problems in logical form. These include the following: quantifiers; events, actions and processes; time and space; collective entities and substances; propositional attitudes and modalities; and questions and imperatives.", "Abstract : Assume that the syntax of natural language can be parsed by a left-to-right deterministic mechanism without facilities for parallelism or backup. It will be shown that this 'determinism' hypothesis, explored within the context of the grammar of English, leads to a simple mechanism, a grammar interpreter. (Author)", "Abstract : The PLANES system is designed to allow non-programmers to obtain information from a large relational data base by typing requests in English. PLANES can deal with pronoun reference, ellipsis and questions about itself. Examples of system operation and detailed program descriptions are included, along with discussions on the answering of vague or complex questions, browsing, the generation of clarifying dialogues with the user, adding the ability to handle new questions, and the organization and content of the data base.", "Some of the arguments that have been given both for and against the use of natural languages in question-answering and programming systems are discussed. Several natural language based computer systems are considered in assessing the current level of system development. Finally, certain pervasive difficulties that have arisen in developing natural language based systems are identified, and the approach taken to overcome them in the REQUEST (Restricted English QUESTion-Answering) System is described.", "A generalization of the notion of ATN grammar, called a cascaded ATN (CATN), is presented. CATN's permit a decomposition of complex language understanding behavior into a sequence of cooperating ATN's with separate domains of responsibility, where each stage (called an ATN transducer) takes its input from the output of the previous stage. The paper includes an extensive discussion of the principle of factoring -- conceptual factoring reduces the number of places that a given fact needs to be represented in a grammar, and hypothesis factoring reduces the number of distinct hypotheses that have to be considered during parsing.", "Abstract : A model for knowledge-based natural language analysis is described. 
The model is applied to parsing English into Conceptual Dependency representations. The model processes sentences from left to right, one word at a time, using linguistic and non-linguistic knowledge to find the meaning of the input. It operates in three modes: structure-driven, position-driven, and situation-driven. The first two modes are expectation-based. In structure driven mode concepts underlying new input are expected to fill slots in the previously built conceptual structures. Noun groups are handled in position-driven mode which uses position-based pooling of expectations. When the first two modes fail to account for a new input, the parser goes into the third, situation-driven mode which tries to handle a situation by applying a series of appropriate experts. Four general kinds of knowledge are identified as necessary for language understanding: lexical knowledge, world knowledge, linguistic knowledge, and contextual knowledge." ], "authors": [ { "name": [ "L. Bolc" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "B. Grosz", "Norman Haas", "G. Hendrix", "Jerry R. Hobbs", "P. Martin", "Robert C. Moore", "Jane J. Robinson", "S. Rosenschein" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "L. Bolc", "T. Strzalkowski" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "S. Rosenschein", "Stuart M. Shieber" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "V. Dahl" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Robert C. Moore" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Mitchell P. Marcus" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "D. Waltz", "Timothy W. Finin", "Fred Green", "F. Conrad", "Bradley A. Goodman" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Stanley R. Petrick" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "W. Woods" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "A. 
Gershman" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null, null, null, null, null, null, null, null, null ], "s2_corpus_id": [ "56581466", "11289202", "2369169", "9564084", "9013266", "18655604", "6616065", "54128430", "13293588", "6169596", "60724649" ], "intents": [ [], [], [], [], [], [], [], [], [], [ "methodology" ], [ "background" ] ], "isInfluential": [ false, false, false, false, false, false, false, false, false, false, false ] }
Problem: The paper aims to describe an experimental version of the natural language information retrieval system DIALOG, specifically designed for use in the field of medicine, to provide physicians with access to information in a conversational manner without requiring programming skills. Solution: The system proposes an automatic, conversational extraction of facts from medical texts through a modular structure, with a key module being the language analysis module that transforms medical text into logical notation, facilitating deductive and knowledge representation methods.
497
0.002012
null
null
null
null
null
null
null
null
d97ef8a04bf3bcf1f996953edbc16209ab52624d
7526396
null
Dealing With Conjunctions in a Machine Translation Environment
A set of rules, named CSDC (Conjunct Scope Determination Constraints), is suggested for attacking the conjunct scope problem, the major issue in the automatic processing of conjunctions, which has been raising great difficulty for natural language processing systems. Grammars embodying the CSDC are incorporated into an existing ATN parser, and are tested successfully against a wide group of "and" conjunctive sentences, which are of three types, namely clausal coordination, phrasal coordination, and gapping. With phrasal coordination the structure with two NPs coordinated by "and" has been given most attention. It is hoped that an ATN parser capable of dealing with a large variety of conjunctions in an efficient way will finally emerge from the present work.
{ "name": [ "Huang, Xiuming" ], "affiliation": [ null ] }
null
null
First Conference of the European Chapter of the Association for Computational Linguistics
1983-09-01
25
26
null
One of the most complicated phenomena in English is conjunction constructions. Even quite simple noun phrases like (1) Cats with whiskers and tails are structurally ambiguous and would cause problems when translated from English to, say, Chinese. Since in Chinese all the modifiers of the noun should go before it, two different translations in Chinese might be obtained from the above phrase: (1a) (With whiskers and tails) de (cats) ("de" is a particle which connects the modifiers and the modifieds); (1b) ((With whiskers) de (cats)) and (tails). Needless to say, a machine translation system should be able to analyse correctly, among other things, the conjunction constructions before high quality translation can be achieved. As is well known, ATN (Augmented Transition Network) grammars are powerful in natural language parsing and have been widely applied in various NL processing systems. However, the standard ATN grammars are rather weak in dealing with conjunctions. In (Woods 73), a special facility SYSCONJ for processing conjunctions was designed and implemented in the LUNAR speech question-answering system. It is capable of analysing reduced conjunctions impressively (eg, "John drove his car through and completely demolished a plate glass window"), but it has two drawbacks: first, for the processing of general types of conjunction constructions, it is too costly and too inefficient; secondly, the method itself is highly non-deterministic and easily results in combinatorial explosions. In (Blackwell 81), a WRD AND arc was proposed. The arc would take the interpreter from the final to the initial state of a computation, then analyse the second argument of a coordinated construction on a second pass through the ATN network. With this method she can deal with some rather complicated conjunction constructions, but in fact a WRD AND arc could have been added to nearly every state of the network, thus making the grammar extremely bulky. Furthermore, her system lacks the power for resolving the ambiguities contained in structures like (1). In the machine translation system designed by (Nagao et al 82), when dealing with conjunctions, only the nearest two items of the same parts of speech were processed, while the following types of coordinated conjunctions were not analysed correctly: (noun + prep + noun) + and + (noun + prep + noun); (adj + noun) + and + noun. (Boguraev in press) suggested that a demon should be created which would be woken up when "and" is encountered. The demon will suspend the normal processing, inspect the current context (the local registers which hold constituents recognised at this level) and recent history, and use the information thus gained to construct a new ATN arc dynamically which seeks to recognise a constituent categorially similar to the one just completed or being currently processed. Obviously the demon is based on expectations, but what follows the "and" is extremely uncertain, so that it would be very difficult for the demon to reach high efficiency. A kind of "data-driven" alternative which may reduce the non-determinism is to try to decide the scope of the left conjunct retrospectively by recognising first the type of the right conjunct, rather than to predict the latter by knowing the category of the constituent to the left of the coordinator which is "just completed or being currently processed" - an obscure or even misleading specification. CASSEX (Chinese Academy of Social Sciences; University of Essex) is an ATN parser based on part of the programs developed by Boguraev (1979) which was designed for the automatic resolution of linguistic ambiguities. Conjunctions, one major source of linguistic ambiguities, however, were not taken into consideration there because, as the author put it himself, "they were felt to be too large a problem to be tackled along with all the others" (Boguraev 79, 1.6). A new set of grammars has been written, and a lot of modifications have been made to the grammar interpreter, so that conjunctions could be dealt with within the ATN framework. The following are the example sentences correctly parsed by the package: Ex1. The man with the telescope and the umbrella kicked the ball. Ex2. The man with the telescope and the umbrella with a handle kicked the ball. Ex3. The man with the telescope and the woman kicked the ball. Ex4. The man with the telescope and the woman with the umbrella kicked the ball. Ex5. The man with the child and the woman kicked the ball. Ex6. The man with the child and the woman with the umbrella kicked the ball. Ex7. The man with the child and the woman is kicking the ball. Ex8. The man with the child and the woman are kicking the ball. Ex9. The man with the child and the umbrella fell. Ex10. The man kicked the ball and the child threw the ball. Ex11. The man kicked the ball and the child. Ex12. The man kicked the child and the woman the ball. Ex13. The man kicked the child and threw the ball. Ex14. The man kicked and threw the ball. Ex15. The man kicked and the woman threw the ball. III ELEMENTARY NP AND EXPANDED NP The term 'elementary NP' is used to indicate a noun phrase which can be embedded in but has no other noun phrases embedded in it. A noun phrase which contains other, embedded, NPs is called 'expanded NP'. Thus, when analysing the sentence fragment "the man with the telescope and the woman with the umbrella", we will have four elementary NPs ("the man", "the telescope", "the woman" and "the umbrella") and two expanded NPs ("the man with the telescope" and "the woman with the umbrella"). We may well have a third kind of NP, the coordinated NP with a conjunction in it, but it is the result of, rather than the material for, conjunction processing, and therefore will not receive particular attention. In the text that follows we will use 'EL-NP' and 'EXP-NP' to represent the two types of noun phrases, respectively. LEFT-PART will stand for the whole fragment to the left of the coordinator; and RIGHT-PART for the fragment to the right of it. LEFT-WORD and RIGHT-WORD will indicate the word that immediately precedes and follows, respectively, the coordinator. The conjunct to the right of the coordinator will be called RIGHT-PHRASE. Constraints for determining the grammaticalness of constructions involving coordinating conjunctions have been suggested by linguists, among which are (Ross 67)'s CSC (Coordinate Structure Constraint), (Schachter 77)'s CCC (Coordinate Constituent Constraint), (Williams 78)'s Across-the-Board (ATB) Convention, and (Gazdar 81)'s nontransformational treatment of coordinate structures using the conception of 'derived categories'. These constraints are useful in the investigation of coordination phenomena, but in order to process coordinating structures automatically, some constraint defined from the procedural point of view is still required. The following ordered rules, named CSDC (Conjuncts Scope Determination Constraints), are suggested and embodied in the CASSEX package so as to meet the need for automatically deciding the scope of the conjuncts:
1. Syntactical constraint. The syntactical constraint has two parts: 1.1 The conjuncts should be of the same syntactical category; 1.2 The coordinated constituent should be in conformity syntactically with the other constituents of the sentence, e.g. if the coordinated constituent is the subject, it should agree with the finite verb in terms of person and number. According to this constraint, Ex8 should be analysed as follows (the representation is a tree diagram with 'CLAUSE' as the root and centred around the verb, with various case nodes indicating the dependency relationships between the verb and the other constituents): (CLAUSE (TYPE DCL) (QUERY NIL) (TNS PRESENT) (ASPECT PROGRESSIVE) (MODALITY NIL) (NEG NIL) (V (KICK ((*ANI SUBJ) ((*PHYSOB OBJE) ((THIS (MAN PART)) INST) STRIK))) (OBJECT ((BALL1 ...)) (NUMBER SINGLE) (QUANTIFIER SG) (DETERMINER ((DET1 ONE)))) (AGENT AND ((MAN ...) (NUMBER SINGLE) (QUANTIFIER SG) (DETERMINER ((DET1 ONE))) (ATTRIBUTE ((PREP (PREP WITH)) ((CHILD ...) (NUMBER ...)))) ((WOMAN ...)))), while Ex7 (and the more general case of Ex5) should be analysed roughly as 'The man with (AND (child) (woman)) is kicking the ball'. 2. NPs whose head noun semantic primitives are the same should be preferred when deciding the scope of the two conjuncts coordinated by "and". However, if no such NPs can be found, NPs with different head noun semantic primitives are coordinated anyhow. Cf (Wilks 75). According to rule 2, Ex1 should be roughly represented as 'The man with (AND (telescope) (umbrella))'; Ex2, 'The man with (AND (telescope) (umbrella with a handle))'; Ex3, '(AND (man with telescope) (woman))' and Ex4, '(AND (man with telescope) (woman with umbrella))'. 3. Symmetry constraint. When rules 1 and 2 are not enough for deciding the scope of the conjuncts, as for Ex5 and Ex6, this rule of preferring conjuncts with symmetrical pre-modifiers and/or post-modifiers will be in effect: Ex5 .... with (AND (child) (woman)) ... 4. If all the three rules above cannot help, the NP to the left of "and" which is closest to the coordinator should be coordinated with the NP immediately following the coordinator: Ex9. The man with (AND (child) (umbrella)) fell. The seemingly straightforward way for dealing with conjunctions using ATN grammars would be to add extra WRD AND arcs to the existing states, as (Blackwell 81) proposed. The problem with this method is that, as (Boguraev in press) pointed out, "generally speaking, one will need WRD AND arcs to take the ATN interpreter from just about every state in the network back to almost each preceding state on the same level, thus introducing large overheads in terms of additional arcs and complicated tests." Instead of adding extra WRD AND arcs to the existing states in a standard ATN grammar, I set up a whole set of states to describe coordination phenomena. The first few states in the set are as follows (at the moment only "and" is taken into consideration): (CONJ/ ((JUMP AND/) (EQ (GETR CONJUNCTION) ...))). The CONJ/ states can be seen as a subgrammar which is separated from the main (conventional) ATN grammar, and is connected with the main grammar via the interpreter. The parser works in the following way. Before a conjunction is encountered, the parser works normally except that two extra stacks are set: **NP-STACK and **PREP-STACK.
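Before the walkthrough of the parser continues below, the four CSDC rules above can be pictured as an ordered filter over candidate left conjuncts. The sketch that follows is only schematic: the data shapes, the primitive labels and the simplified symmetry test are assumptions for illustration, not the CASSEX registers or grammar.

```python
# Schematic sketch of the ordered CSDC rules: each candidate NP to the left of
# "and" is a dict with a syntactic category, a head-noun semantic primitive,
# and a flag for a PP post-modifier. All shapes and labels are assumptions.

def choose_left_conjunct(candidates, right):
    """candidates: NPs to the left of "and", the one nearest the coordinator last."""
    # Rule 1: same syntactic category (agreement with the verb is not modelled here).
    cands = [np for np in candidates if np["cat"] == right["cat"]]
    # Rule 2: prefer a matching head-noun semantic primitive, if any candidate has one.
    same_prim = [np for np in cands if np["prim"] == right["prim"]]
    if same_prim:
        cands = same_prim
    # Rule 3: prefer symmetrical post-modifiers (both conjuncts have one, or neither).
    symmetric = [np for np in cands if np["has_pp"] == right["has_pp"]]
    if symmetric:
        cands = symmetric
    # Rule 4: otherwise take the NP closest to the coordinator.
    return cands[-1] if cands else None

# Ex9: "The man with the child and the umbrella fell"
left = [{"cat": "NP", "prim": "*ANI", "has_pp": True},      # the man (with the child)
        {"cat": "NP", "prim": "*ANI", "has_pp": False}]     # the child
right = {"cat": "NP", "prim": "*PHYSOB", "has_pp": False}   # the umbrella
print(choose_left_conjunct(left, right))                    # -> the entry for "the child"
```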
Each NP, either EL-NP or EXP-NP, is pushed into **NP-STACK, together with a label indicating whether the NP in question is a subject (SUBJ), an object (OBJ) or a preposition object (NP-IN-NMODS). The interpreter takes responsibility for looking ahead one word to see whether the word to come is a conjunction. This happens when the interpreter is processing "word-consuming" arcs, ie CAT, WRD, MEM and TST arcs. Hence there is no need for explicitly writing WRD AND arcs into the grammar at all. By the time a conjunction is met, while the interpreter is ready to enter the CONJ/ state, either a clause (Ex10-13) or a noun phrase in subject position (Ex1-9) would have been POPed, or a verb (Ex14-15) would have been found. For the first case, a flag LEFT-PART-IS-CLAUSE will be set to true, and the interpreter will try to parse RIGHT-PART as a clause. If it succeeds, the representation of a sentence consisting of two coordinated clauses will be outputted. If it fails, a flag RIGHT-PART-IS-NOT-CLAUSE is set up, and the sentence will be reparsed. This time the left part will not be treated as a clause, and a coordinated NP object will be looked for instead. Ex10 and Ex11 are examples of coordinated clauses and a coordinated NP object, respectively. One case is treated specially: when LEFT-PART-IS-CLAUSE is true and RIGHT-WORD is a verb (Ex13), the subject will be copied from the left clause so that a right clause can be built. For the second case, a coordinated NP subject will be looked for. Eg, for Ex4, by the time "and" is met, an NP "the man with the telescope" would have been POPed, and the state of affairs of the **NP-STACK would be like this: After the execution of the arc ((PUSH NP) (NP-START)), RIGHT-PHRASE has been found. If it has a PP modifier, a register NMODS-CONJ will be set to the value of the modifier. Now the NPs in the **NP-STACK will be POPed one by one to be compared with the right phrase semantically. The NP whose formula head (the head of the NOUN in it) is the same as that of the right conjunct will be taken as the proper left conjunct. If the NP matched is a subject or object, then a coordinative NP subject or object will be outputted; if it is an EL-NP in a PP modifier, then a function REBUILD-SUBJ or REBUILD-OBJ, depending on whether the modified EXP-NP is the subject or the object, will be called to rebuild the EXP-NP whose PP modifier should consist of a preposition and two coordinated NPs. Here one problem arises: for Ex5, the first NP to be compared with the right phrase ("the woman") would be "the man with the child", whose head noun "man" would be matched to "woman" but, according to our Symmetry Constraint, it is "child" that should be matched.
In order to implement this rule, whenever NMODS-CONJ is empty (meaning that the right NP has no post-modifier), the **NP-STACK should be reversed so that the first NP to be tried would be the one nearest to the coordinator (in this case "the child"). For the third case (LEFT-WORD is a transitive verb and the object slot is empty, Exs 14 and 15), the right clause will be built first, with or without copying the subject from LEFT-PART depending on whether a subject can be found in RIGHT-PART. Then the left clause will be completed by copying the object from the right clause, and finally a clausal coordination representation will be returned. In the course of parsing, whenever a finite verb is met, the NPs at the same level as the verb that have been PUSHed into the **NP-STACK should be deleted from it, so that when constructing a possible coordinative NP object, the NPs in the subject position would not confuse the matching. Ex11 is thus correctly analysed. The package is written in RUTGERS-UCI LISP and is implemented on the PDP-10 computer at the University of Essex. It performs satisfactorily. However, there is still much work to be done. For instance, the most efficient way of treating reduced conjunctions is still to be found. Another problem is the scope of the pre-modifiers and post-modifiers in coordinate constructions, for the resolution of which the Symmetry constraint may prove inadequate (eg, it cannot discriminate "American history and literature" and "American history and physics"). It is hoped that an ATN parser capable of dealing with a large variety of coordinated constructions in an efficient way will finally emerge from the present work.
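The **NP-STACK matching described above, including the reversal of the stack when NMODS-CONJ is empty, can be made concrete with a small self-contained toy. The primitive labels and helper names below are assumptions for illustration, not the RUTGERS-UCI LISP implementation.

```python
# Toy version of the stack-based search for the left conjunct: the right
# conjunct is compared semantically (by the head of its noun formula) with the
# stacked NPs, and when it has no post-modifier the stack is reversed so that
# the NP nearest the coordinator is tried first.

def match_left_conjunct(np_stack, right_prim, right_has_pp):
    """np_stack: NPs pushed left to right, each as (surface head, primitive, label)."""
    order = np_stack if right_has_pp else list(reversed(np_stack))
    for head, prim, label in order:
        if prim == right_prim:                 # same formula head: take this NP
            return head, label
    return None                                # caller falls back to the default rule

# Ex5: "The man with the child and the woman kicked the ball"
stack = [("man", "*ANI", "SUBJ"), ("child", "*ANI", "NP-IN-NMODS")]
print(match_left_conjunct(stack, "*ANI", right_has_pp=False))
# -> ('child', 'NP-IN-NMODS'): with no post-modifier on "the woman", the stack
#    is reversed and "the child" is coordinated, as the Symmetry Constraint requires.
```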
null
null
null
null
Main paper: introduction: One of the most complicated phenomena in English is conjunction constructions. Even quite simple noun phrases like (i) Cats with whiskers and tails are structurally ambiguous and would cause problem when translated from English to, sa~-, Chinese. Since in Chinese all the modifiers of the noun should go before it, two different translations in Chinese might be got from the above phrase:(la) (With whiskers and tails) de (cats) ("de" is a particle which connects the modifiers and the modifieds);(ib) ((With whiskers) de (cats)) and (tails).Needless to say, a machine translation system should be able to analyse correctly among ether things the conjunction constructions before high quality translation can be achieved.As is well known, ATN (Augmented Transition Network) grammars are powerful in natural language parsing and have been widely applied in various NL processing systems. However, the standard ATN grsamars are rather weak in dealing with conjunctions.In (Woods 73 ), a special facility SYSCONJ for processing conjunctions was designed and implemented in the LUNAR speech question-answering system. It is capable of analysing reduced conjunctions impressively (eg, "John drove his car through and completely demolished a plate glass window"), but it has two drawbacks: first, for the processing of general types of conjunction constructions, it is too costly and too inefficient; secondly, the method itself is highly non-deterministic and easily results in combinatorial explosions.In (Blackwell 81 ), a WRD AND arc was proposed. The arc would take the interpreter from the final to the initial state of a computation, then analyse the second argument of a coordinated construction on a second pass through the ATN network. With this method she can deal with some rather complicated conjunction constructions, but in fact a WRD AND arc could have been added to nearly every state of the network, thus making the grammar extremely bulky. Furthermore, her syste~ lacks the power for resolving the ambiguities contained in structures like (1).In the machine translation system designed by (Nagao et al 82) , when dealing with conjunctions, only the nearest two items of the same parts of speech were processed, while the following types of coordinated conjunctions were not analysed correctly:(noun + prep + noun) + and + (noun + prep + noun); (adj + noun) + and + noun. (Boguraev in press) suggested that a demon should be created which would be woken up when "and" is encountered. The demon will suspend the normal processing, inspect the current context (the local registers which hold constituents recognised at this level) and recent history, and use the information thus gained to construct a new ATN arc dynamically which seeks to recognise a constituent categorially similar to the one just completed or being currently processed. Obviously the demon is based on expectations, but what follows the "and" is extremely uncertain so that it would be very difficult for the demon to reach a high efficiency. A kind of "data-driven" alter-native which may reduce the non-determinism is to try to decide the scope of the left conjunct retrospectively by recognising first the type of the right conjunct, rather than to predict the latter by knowing the category of the constituent to the left of the coordinator which is "just completed or being currently processed" --an obscure or even misleading specification. the ball.. The man kicked the child and threw the ball.Exlh. 
The man kicked and threw the ball.ExlS. The man kicked and the woman threw the ball.CASSEX (Chinese Academy of Social Sciences;University of Essex) is an ATN parser based on part of the programs developed by Boguraev (1979) which was designed for the automatic resolution of linguistic ambiguities. Conjunctions, one major source of linguistic ambiguities, however, were not taken into consideration there because, as the author put it himself, "they were felt to be too large a problem to be tackled along with all the others" (Boguraev 79, 1.6).A new set of grammars has been written, and a lot of modifications has been made to the grammar interpreter, so that conjunctions could be dealt with within the ATN framework.The following are the example sentences rectly parsed by the package:Exl. The man with the telescope and the brella kicked the ball.Ex2.Ex~.Ex6.ExT.Ex8.ExlO.ExlI.ExI2.The man with the telescope and the umbrella with a handle kicked the ball.The man with the telescope and the woman kicked the ball.The man with the telescope and the woman with the umbrella kicked the ball.The man with the child and the woman kicked the ball.The man with the child and the woman with the umbrella kicked the ball.The man with the child and the woman is kicking the ball.The man with the child and the woman are kicking the ball.The man with the child and the umbrella fell.The man kicked the ball and the child threw the ball.The man kicked the ball and the child.The man kicked the child and the woman III ELEMENTARY NP AND EXPANDED NP The term 'elementary NP' is used to indicate a noun phrase which can be embedded in but has no other noun phrases embedded in it. A noun phrase which contains other, embedded, NPs is called 'expanded Np,.Thus, when analysing the sentence fr84~ment "the man with the telescope and the woman with the umbrella", we will have four elementary NPs ("the man", "the telescope", "the woman" and "the umbrella") and two expanded NPs ("the man with the telescope" and "the woman with the umbrella"). We may well have a third kind of NP, the coordinated NP with conjunction in it, but it is the result of, rather than the material for, conjunction processing, and therefore will not receive particular attention. In the text followed we will use 'EL-NP' and 'EXP-NP' to represent the two types of noun phrases, respectively.LEFT-PART will stand for the whole fragment to the left of the coordinator;andRIGHT-PART for the fragment to the right of it. LEFT-WORD and RIGHT-WORD will indicate the word immediately precedes and follows, respectively, the coordinator. The conjunct to the right of the coordinator will be called RIGHT-PHRASE.Constraints for determining the grammaticalness of constructions involving coordinating conjunctions have been suggested by linguists, among which are (Ross 67)'s CSC (Coordinate Structure Constraint), (Schachter 77)'s CCC (Coordinate Constituent Constraint), (Williams 78)'s Across-the-Board (ATB) Convention, and (Gazdar @l)'s nontransformational treatment of coordinate structures using the conception of 'derived categories'. These constraints are useful in the investigation of coordination phenomena,but in order to process coordinating structures automatically, some constraint defined from the procedural point of view is still required.The following ordered rules, named CSDC (Conjuncts Scope Determination Constraints), are suggested and embodied in the CASSEX package so as to meet the need for automatically deciding the scope of the conjuncts:i. 
Syntactical constraint.The syntactical constraint has two parts:i.i The conjuncts should be of the same syntactical category;1.2 The coordinated constituent should be in conformity syntactically with the other constituents of the sentence, eg if the coordinated constituent is the subject, it should agree with the finite verb in terms of person and number.Acoording to this constraint, Ex8 should be analysed as follows (the representation is a tree diagram with 'CLAUSE' as the root and centred around the verb, with various case nodes indicating the dependency relationships between the verb and the other constituents):( CLAUSE (TYPE DCL) (QUERY NIL) (TNS PRESENT) (ASPECT PHOGRESSIVE) ( MODALITY NIL) (NEG NIL) (v (KICK ((*ANI SUBJ) ( (*PHYSOB OBJE) ( (THIS (MAN PART) ) INST) STRIK) )* (OBJECT ((BALL1 ,..)) (NLg~ER SINGLE) (QUANTIFIER SG) (DETERMINER ((DETI ONE) ) ) ( AGENT AND ((MAN ...) (NUMBER SINGLE) (QUANTIFIER SG) (DETERMINER ((DETIONE)) ) (ATTRIBUTE ((PREP (PREP WITH)) ( (CHILD ...) (NUMBER ... ) ((woMAN ... )while Ex7 (and the more general case of ExS) should be analysed roughly as: NPs whose head noun semantic nrimitives are the same should be preferred when deciding the scope of the two conjuncts coordinated by "and". However, if no such NPs can be found, NPs with different head noun semantic primitives are coordinated anyhow.Cf (Wilks 75 ).According to rule 2, Exl should be roughly represented as 'The man with (AND (telescope) (umbrella))'; Ex2, 'The man with (AND (telescope) (umbrella with a handle))'; Ex3, '(AND (man with telescope) (woman))' and Exh, '(AND (man with telescope) (woman with umbrella))' 3. Symmetry constraint.When rules i and 2 are not enough for deciding the scope of the conjuncts, as for Ex5 and Ex6, this rule of preferring conjuncts with symmetrical pre-modifiers and/or post-modifiers will be in effect:Ex5 .... with (AND (child) (woman)) ... If all the three rules above cannot help, the NP to the left of "and" which is closest to the coordinator should be coordinated with the NP immediately following the coordinator:Ex9. The man with (AND (child) (umbrella)) fell.The seemingly straightforward way for dealing with conjunctions using the ATN grammars would be to add extra WRD AND arcs to the existing states, as (Black-well 81) proposed. The problem with this method is that, as (Boguraev in press) pointed out, "generally speaking, one will need WRD AND arcs to take the ATN interpreter from just about every state in the network back toalmosteach preceding state on the same level, thus introducing large overheads in terms of additional arcs and complicated tests."Instead of adding extra WRD AND arcs to the existing states in a standard ATN gra~,nar, I set up a whole set of states to describe coordination phenomena. The first few states in the set are as follows:(CONJ/ At the moment only ((JUMP AND/) "and" is taken into (EQ (GETR CONJUNCTION)consideration. .o.)The CONJ/ states can be seen as a subgrammr which is separated from the main (conventional) ATN grezmar, and is connected with the main grammar via the interpreter.The parser works in the following way.Before a conjunction is encountered, the parser works normally except that two extra stacks are set: **NP-STACK and **PREP-STACK. 
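Concretely, and purely for illustration (the term and predicate names below are assumptions, and the original package is written in LISP rather than Prolog), an entry on the **NP-STACK can be pictured as a small record holding the role label, the head noun, its semantic primitive and any post-modifiers:

% Illustrative sketch of the NP-stack bookkeeping (not the original LISP).
% Each completed NP is pushed together with its role label, head noun,
% head-noun semantic primitive and post-modifiers.
push_np(Role, Head, Prim, PostMods, Stack, [np(Role, Head, Prim, PostMods)|Stack]).

% e.g. while parsing "The man with the telescope ...":
% ?- push_np('NP-IN-NMODS', telescope, thing, [], [], S1),
%    push_np('SUBJ', man, human, [pp(with, telescope)], S1, S2).
% S2 = [np('SUBJ', man, human, [pp(with, telescope)]),
%       np('NP-IN-NMODS', telescope, thing, [])].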
Each NP, either EL-NP or EXP-NP, is pushed into **NP-STACK,together with a label indicating whether the NP in question is a subject (SUBJ) or an object (OBJ) or a preposition object (NP-IN-NMODS).The interpreter takes responsibility of looking ahead one word to see whether the word to come is a conjunction. This happens when the interpreter is processing "word-consuming" arcs, ie CAT, WRD, MEM and TST arcs. Hence no need for explicitly writing into the grammar WRD AND arcs at all.By the time a conjunction is met, while the interpreter is ready to enter the CONJ/ state, either a clause (ExlO-13) or a noun phrase in subject position (Exl-9) would have been POPed, or a verb (Exlh-15) would have been found. For the first case, a flag LEFT-PART-IS-CLAUSE will be set to true, and the interpreter will t~j to parse RIGHT-PART as a clause. If it succeeds, the representation of a sentence consisted of two coordinated clauses will be outputted. If it fails, a flag RIGHT-PART-IS-NOT-CLAUSE is set up, and the sentence will be reparsed. This time the left-part will not be treat -ed as a clause, and a coordinated NP object will be looked for instead. ExlO and Exll are examples of coordinated clauses and coordinated NP object, respectively. One case is treated specially: when LEFT-PART-IS-CLAUSE is true and RIGHT-WORD is a verb (Exl3), the subject will be copied from the left clause so that a right clause could be built.For the second case, a coordinated NP subject will be looked for. Eg, for Exh, by the time "and" is met, an I~P "the man with the telescope" would have been POPed, and the state of affairs or the **NP-STACK would be like this: After the excution of the arc ((PUSH NP) (NP-START)), RIGHT-PHRASE has been found. If it has an PP modifier, a register NMODS-CONJ will be set to the value of the modifier. Now the NPs in the **NP-STACK will be POPed one by one to be compared with the right phrase semantically. The NP whose formula head (the head of the NOUN in it) is the same as that of the right conjunct will be taken as the proper left conjunct. If the NP matched is a subject or object, then a coordinative NP subject or object will be outputted; if it is an EL-NP in a PP modifier, then a function REBUILD-SUBJ or REBUILD-OBJ, depending on whether the modified EXP-NP is the subject or the object, will be called to re-build the EXP-NP whose PP modifier should consist of a preposition and two coordinated NPs.Here one problem arises: for Ex5, the first NP to be compared with the right phrase ("the woman") would be "the man with the child" whose head noun "~usn" would be matched to "woman" but, according to our Symmetry Constraint, it is "child" that should be matched. 
In order to implement this rule, whenever NMODS-CONJ is empty (meaning that the right NP has no post-modifier), the **NP-STACK should be reversed so that the first NP to be tried is the one nearest to the coordinator (in this case "the child"). For the third case (LEFT-WORD is a transitive verb and the object slot is empty, Exs 14 and 15), the right clause is built first, with or without copying the subject from LEFT-PART depending on whether a subject can be found in RIGHT-PART. Then the left clause is completed by copying the object from the right clause, and finally a clausal coordination representation is returned. In the course of parsing, whenever a finite verb is met, the NPs at the same level as the verb that have already been PUSHed onto the **NP-STACK are deleted from it, so that when a possible coordinated NP object is being constructed, the NPs in subject position do not confuse the matching. Ex11 is thus correctly analysed. The package is written in RUTGERS-UCI LISP and is implemented on the PDP-10 computer at the University of Essex. It performs satisfactorily. However, there is still much work to be done. For instance, the most efficient way of treating reduced conjunctions has yet to be found. Another problem is the scope of pre-modifiers and post-modifiers in coordinate constructions, for whose resolution the Symmetry constraint may prove inadequate (e.g., it cannot discriminate between "American history and literature" and "American history and physics"). It is hoped that an ATN parser capable of dealing with a large variety of coordinated constructions in an efficient way will finally emerge from the present work.
null
null
null
null
{ "paperhash": [ "huang|dealing_with_conjunctions_in_a_machine_translation_environment", "berwick|a_deterministic_parser_with_broad_coverage", "nagao|an_english_japanese_machine_translation_system_of_the_titles_of_scientific_and_engineering_papers", "mccord|slot_grammars", "sag|deletion_and_logical_form", "schachter|constraints_on_coordination", "ross|constraints_on_variables_in_syntax" ], "title": [ "Dealing With Conjunctions in a Machine Translation Environment", "A Deterministic Parser With Broad Coverage", "An English Japanese Machine Translation System of the Titles of Scientific and Engineering Papers", "Slot Grammars", "Deletion And Logical Form", "Constraints on coordination", "Constraints on variables in syntax" ], "abstract": [ "A set of rules, named CSDC (Conjunct Scope Determination Constraints), is suggested for attacking the conjunct scope problem, the major issue in the automatic processing of conjunctions which has been raising great difficulty for natural language processing systems. Grammars embodying the CSDC are incorporated into an existing ATN parser, and are tested successfully against a wide group of \"and\" conjunctive sentences, which are of three types, namely clausal coordination, phrasal coordination, and gapping. With phrasal coordination the structure with two NPs coordinated by \"and\" has been given most attention.It is hoped that an ATN parser capable of dealing with a large variety of conjunctions in an efficient way will finally emerge from the present work.", "This paper is a progress report on a scries of three significant extensions to the original parsing design of (Marcus J980).* The extensions are: Ihe range of syntactic phenomena handled has been enlarged, encompassing sentences with Verb Phrase deletion, gapping, and rightward movement, and an additional output representation of anaphor-antcccdcnt relationships has been added (including pronoun and quantifier interpretation). A complete analysis of the parsing design has been carried out, clarifying the parser's relationship to the extended I R(k,t) parsing method as originally defined by (Knuth 1965) and explored by (Szymanski and Williams 1976). The formal model has led directly to the design of a \"stripped down\" parser that uses standard LR(k) technology and to results about the class of languages that can be handled by Marcus-style parsers (briefly, the class of languages is defined by those that can be handled by a deterministic, two-stack push-down automaton with severe restrictions on the transfer of material between the two sucks, and includes some strictly context-sensitive languages). 1 EXTENDING THE MARCUS PARSER While the Marcus parser handled a wide range of everyday syntactic constructions, there are many common English sentences that it could not analyze. One gap in its abilities arises because it did not have a way to represent the possibility of rightward movement that is, cases where a constituent is displaced to the right: A book [about nuclear disarmament] appeared yesterday. --> A book appeared yesterday [about nuclear disarmament]. Further, the only way that the Marcus parser could handle leftward movement was via the device of linking a \"dummy variable\" (a trace) to an antecedent occurring somewhere earlier in the sentence. For instance, the sentence, \"Who did Mary kiss?\" is parsed as, Who did Mary kiss trace!, where trace is a variable bound to its \"value\" of who, indicating the intuitive meaning of the sentence, \"For which X, did Mary kiss X\" . 
Jn the original parser design, a trace was of the category NP, so that only Noun Phrases could be linked to traces. But this meant that sentences where other than NPs are displaced or deleted cannot be analyzed. This includes the following kinds of sentences, where deleted material is indicated in square brackets.", "The title sentences of scientific and engineering papers are analyzed by simple parsing strategies, and only eighteen fundamental sentential structures are obtained from ten thousand titles. Title sentences of physics and mathematics of some databases in English are translated into Japanese with their keywords, author names, journal names and so on by using these fundamental structures. The translation accuracy for the specific areas of physics and mathematics from INSPEC database was about 93%.", "This paper presents an approach to natural language grammars and parsing in which slots and rules for filling them play a major role. The system described provides a natural way of handling a wide variety of grammatical phenomena, such as WH-movement, verb dependencies, and agreement.", "Thesis. 1976. Ph.D.--Massachusetts Institute of Technology. Dept. of Foreign Literatures and Linguistics.", "1. 'The fundamental aim in the linguistic analysis of a language L', according to Chomsky (1957:13), 'is to separate the GRAMMATICAL sequences which are the sentences of L from the UNGRAMMATICAL sequences which are not sentences of L, and to study the structure of the grammatical sequences.' While this aim is obviously somewhat utopian, the research results of recent years have shown that even a limited achievement of it, in connection with some small subset of the sentences of a language, may be of considerable interest both in itself and in its consequences for general linguistic theory. In the present paper, I shall try to provide further evidence to this effect through an investigation of certain determinants of grammaticalness for constructions involving coordinating conjunctions, primarily constructions involving Eng. and.", "Massachusetts Institute of Technology. Dept. of Modern Languages and Linguistics. Thesis. 1967. Ph.D." ], "authors": [ { "name": [ "Xiuming Huang" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "R. Berwick" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "M. Nagao", "Junichi Tsujii", "K. Yada", "Toshihiro Kakimoto" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Michael C. McCord" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "I. Sag" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Paul Schachter" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "J. Ross" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null, null, null, null, null ], "s2_corpus_id": [ "267852350", "12595499", "1074066", "6973469", "60884912", "123236144", "60624374" ], "intents": [ [], [], [], [], [ "background" ], [], [ "background" ] ], "isInfluential": [ false, false, false, false, false, false, false ] }
Problem: The paper addresses the conjunct scope problem in the automatic processing of conjunctions, which poses challenges for natural language processing systems. Solution: The paper proposes a set of rules called CSDC (Conjunct Scope Determination Constraints) to tackle the conjunct scope problem. These rules are incorporated into an existing parser and successfully tested against various types of conjunctive sentences, aiming to develop an ATN parser capable of efficiently handling a wide range of conjunctions.
497
0.052314
null
null
null
null
null
null
null
null
333d2dee2d8e3de325a87b49cce90e79402be744
17161699
null
A {P}rolog Implementation of {L}exical {F}unctional {G}rammar as a Base for a Natural Language Processing System
0. ABSTRACT. The aim of this paper is to present parts of our system [2], which is to construct a database out of a narrative natural language text. We think the parts are of interest in their own right. The paper consists of three sections: (I) We give a detailed description of the PROLOG implementation of the parser, which is based on the theory of lexical functional grammar (LFG). The parser covers the fragment described in [1, §4], i.e., it is able to analyse constructions involving functional control and long distance dependencies. We will show that PROLOG provides an efficient tool for LFG implementation: a phrase structure rule annotated with functional schemata, such as S -> NP VP with the schema (↑SUBJ)=↓ on the NP and ↑=↓ on the VP, is to be interpreted as, first, identifying the special grammatical relation of subject position of any sentence analysed by this clause to be the NP appearing in it, and second, as identifying all grammatical relations of the sentence with those of the VP. This universal interpretation of the metavariables ↑ and ↓ corresponds to the universal quantification of variables appearing in PROLOG clauses. The procedural semantics of PROLOG is such that the instantiation of the variables in a clause is inherited from the instantiation given by its subgoals, if they succeed. Thus there is no need for a separate component which solves the set of equations obtained by applying the LFG algorithm.
{ "name": [ "Frey, Werner and", "Reyle, Uwe" ], "affiliation": [ null, null ] }
null
null
First Conference of the {E}uropean Chapter of the Association for Computational Linguistics
1983-09-01
4
18
null
null
null
O. ABSIRACr ~ne aim of this paper is to present parts of our system [2] , which is to construct a database out of a narrative natural la~ text. We think the parts are of interest in their o~. The paper consists of three sections: (I) We give a detailed description of the PROLOG implementation of the parser which is based on the theory of lexical functional grammar (I/V.). The parser covers the fragment described in [1, 94] .I.e., it is able to analyse constructions involving functional control and long distance dependencies. We will to show that -PROLOG provides an efficient tool for LFG-implementation: a phrase structure rule annotated with ftmctional schemata like~ M~ w~is ~^ be interpreted as, first, identifying the special grmr, m/tical relation of subject position of any sentence analyzed by this clause to he the h~ appearing in it, and second, as identifying all g~,~mtical relations of the sentence with those of the VP. This ~iversal interpretation of the ~tavariables ~ and & corresponds to the universal quantification of variables appearing in PROl/~uses. The procedural ssm~ntios of PROLOG is such that the instantietion of the ~ariables in a clause is inherited from the instantiation given by its subgoals, if they succeed. Thus there is no need for a separate component which solves the set of equations obtained by applying the I/G algorithm.-there is a canonical way of translati~ LFG into a PROLOG progz~,~. (II) For the se~ntic representation of texts we use the Discourse Representation q]neory developped by Psns [,a~p . At present the implerentation includes the fragment described in [4] .In addition it analyses different types of negation and certain equi-and raising-verbs. We postulate some requirenents a semantic representation has to fulfill in order to he able to analyse whole texts. We show how K~p's theory meets these requirements by analyzing sample disconrses involving amaphoric ~'s. (III) Finally we sketch how the parser formalism ca~ be augmented to yield as output discourse representation structures. To do this we introduce the new notion of 'logical head' in addition to the LFG notion of 'grmmmtical head'. reason is the wellknown fact that the logical structure of a sentence is induced by the determiners and not by the verb which on the other hand determines the thenatic structure of the sentence. However the verb is able to restrict quantifier scope anbiguities or to induce a preference ordering on the set of possible quantifier scope relations. ~-erefore there must he an interaction between the grammatical head and the logical head of a phrase.A main topic in AI research is the interaction between different components of a systen.But insights in this field are primarily reached by experience in constructing a complem system. Right frcm the beginning, however, one should choose formalisms which are suitable for a s~nple and transparent transportion of information. We think LFG meets this requirenent.The formalism exhibiting the analysis of a sentence c~ he expanded in a simple way to contain entries which are used during the parse of a whole text, for example discourse features like topic or domain dependent knowledge conming from a database associated with the lexicon. Since I/G is a kind of u~_ification grammar it allows for constructing patterns which enable the following sentences to refine or to change the content of these disc~irse features. 
Knowledge gathered by a preceding sentence can he used to lead the search in the lexicon by demanding that certain feature values match. In short we hope that the nearly tmiform status of the different description tools allows simple procedures for the expansion and mani~Llation by other components of the syst~n. But this was a look ahead.Let us mow come to the less a~bitious task of implementing the grmmmr of [i,~4] . iexical functional g~ (LFG) is a theory that extends phrase structure ~L~,mrs without using transformations. It ~nphasizes the role of the grammatical f~Ictions and of the lexicon. Another powerful formalism for describing natural languages follows from a method of expressing grammars in logic called definite clause gz~,srs (DOG). A DOG constitutes a PROIOG programne. We %~nt to show first, how LFG can he tr-amslated into DOG and second, that PROLOC provides an efficient tool for I/D-Implementation in that it allows for the construction of functional structures directly during the parsing process. I.e. it is not necessary to have seperate components which first derive a set of f~mctional equations from the parse tree and secondly generate an f-str~ture by solving these equations. Let us look at an example to see how the LFG machinery works. We take as the sample sentence "a w~man expects an anerican to win'. ql%e parsing of the sentence proceeds along the following lines.~ne phrase structure rules in (i) generate the phrase structure tree in (2) (without considering the schemata written beneath the rule elements).EQUATION..I. s ~_~ v PFET N a worn expects & ~me~'ioan to win the c-stru~ture will be annotated with the functional schemata associated with the rules . ~he schemata found in the lexical entries are attached to the leave nodes of the tree. ~his is shown in (3). Then the tree will he Indexed. ~e indices instantiate the upand down-arrows. An up-arrow refers to the node dominating the node the schema is attached to. A d~n-~ refers to the node which carries the f~ctlonal schema. Tns result of the instantiation process is a set of ftmctional equations.)= 4, 1 1 (*SPEC)=A (+NLM)=SG (~NU'O=SG (+Gm)=F~ (~PmS)=3 (~mZD)='~ndAN" V NP VP" l~r N VP 1 (~S~EC)=~ (4m0---SC (+NU~)=SG 4%~mS)=3 (+PRED)= ' ~RICAN" (¢ reED)=" E~ECT<(SUBJ) ( X~)>( OBJ)' (4 ~ENSE)=mES \ ~=~ V (~ suBJ ~M)=SG (÷mED)='Wn~(SUBJ)>' (~S[mJ ProS)=3 4+xcem su~J)=(osJ)We have listed part of them in 44). TOe solving of these equations yields the f~ctional str~zture in (5).ED "l,~l~/'r' ~ 3 NINSG reED "EX~ECT<(SU~) ( XCmP)> ( O~J)" A ~mED 'A~m~C~ NU~ SG~ )It is composed of grammtical ftmction naras, s~antic forms and feature symbols. The crucial elements of LFG (in contrast to transformational g~n.ar)are the grammticel functiens like SL~J, OBJ, XCCMP and so on. The fu%ctional structure is to he read as containing pointers frem the functio~ appearing in the semantic forms to the corresponding f-structures. The ~,,atical functions assumed by LFG are classified in subcategorizable (or governable) and nonm~*zategorizable functions. TOe subcategorizable ones are those to which lexlcal items can make reference.TOe item "expects' for e~smple subcategorizes three functions, but only the material inside the angled brackets list the predicate's smmntic arguments. X{I~P and XAIU are ~/~e only open grammtical functions, i.e. ,they can denote functionally controlled clauses.In our exhale this phenomena is lexically induced by the verb "expects'. Tnis is expressed by its last sch~mm "(%XC[~P SUBJ)=(@OBJ)". 
It has the effect that the 0]~of the sentence will becmme the SUBJ of the XC~MP, that me.%ns in our example it becomes the argument of d~e predicate 'win'. Note that the analysis of the sentence "a woman promises an ~merlcan to win" would differ in two respects. First the verb 'prcmlses' lists all the three ft~ctions subcategorized by it in its sem~ntlc argument structure. And second 'premises" differs from "expects' just in its f~mctional control schema, i.e., here we find the equation "(#X{~MP SUBJ)=(A~SLBJ) '' yielding an arrow from the SL~J of the XC~MP to the SUBJ of the sentence in the final f-structure. An f-structure must fulfill the following conditions in order to be a solution -uniqueness: every f-nane which has a value has a ~ique value -completeness:the f-structure must contain f-values for all the f-na~es subcategorized by its predicate -coherence: all the subcate~orizable fzmctions the f-structure contains must be ~tegorised by its predicate The ability of lexical irons to determine the features of other items is captured by the trivial equations. Toey propagate the feature set which is inserted by the lexical item up the tree. For e~mple the features of the verb become features of the VP end the features of the VP become features of S. The ~llqueness principle guarantees that any subject that the clause contains will have the features required by the verb. The trivial equation makes it also possible that a lexical item, here the verb, can induce a f~mctional control relationship he~ different f-structures of the sentence. An ~mportant constraint for all references to ftmctions and fonctional features is the principle of f~mctional locality: designators in lexical and grmm~tical schemata can specify no more than two iterated f~mction applications. Our claim is t|mt using DCG as a PROLOG programe the parsing process of a sentence according to the LFG-theory can be done more efficiently by doing all the three steps described above simultaneously. Why is especially PROLOG useful for doing this? In the a;motated e-structure of the LFG theory the content of the f~mctional equations is only '"~wn" by the node the equation is annotated to and by the immediately dominating node. The memory is so to speak locally restricted. Thus during the parse all those bits of info~tion have to be protocolled for so~e other nodes. This is done by means of the equations. In a PROIOG programme however the nodes turn into predicates with arEun*~ts.Tns arguments could be the same for different predicates within a clause. Therefore the memory is '~orizentall~' not restricted at all. Furthermore by sharing of variables the predicates which are goals ca~ give infon~tion to their subgoals.In short, once a phrase structure grammr has been translated into a PROIOG pragraune every node is potentially able to grasp information from any other node. Nonetheless the parser we get by embedding the restricted LFG formalism Into the highly flexible r~G formalism respects the constraints of Lexlcal ftmctlonal granular. Another important fact is that LFG tells the PROIOG programmer in an exact manner what information the purser needs at which node and just because this information is purely locally represented in the LFG formalism it leads to the possibility of translating 12G into a PROLOG programme in a ca~mical wey. We have said that in solving the equations LFG sticks together informations ¢mmiog from different nodes to build up the final output. 
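One concrete instance of this "sticking together" is the functional control schema (↑XCOMP SUBJ)=(↑OBJ) of "expects" discussed above: in a Prolog encoding it amounts to nothing more than a shared variable. The fragment below is purely illustrative; the predicate lex/5 and the fstr(...) term structure are assumptions made for this example, not the authors' code.

% Illustrative only: the control equation is realised as variable sharing.
% The object variable Obj reappears as the subject slot of the XCOMP, so the
% OBJ of the clause automatically becomes the argument of the embedded predicate.
lex(expects, Subj, Obj, XComp,
    fstr(pred(expect(Subj, XComp)), subj(Subj), obj(Obj), xcomp(XComp))) :-
    XComp = fstr(_EmbeddedPred, subj(Obj)).      % (XCOMP SUBJ) = (OBJ)

% "promises" would differ only in its control schema, sharing the subject instead:
%     XComp = fstr(_EmbeddedPred, subj(Subj)).   % (XCOMP SUBJ) = (SUBJ)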
To mirror this the following PROLOG feature is of greatest importance. For the construction of the wanted output during the parsing process structures can he built up piecsneal, leaving unspecified parts as variables. The construction of the output need not he strictly parallel to the application of the corresponding rules. Variables play the role of placeholders for structures which are found possibly later in the parsing process. A closer look at the verb entries as formulated by LFG reveals that the role of the f~mction names appearing there is to function as placeholders too. To summarize: By embedding the restricted LFG formalism into the hlgly flexible definite clause grammr fonmg/ismwemake llfe easier. Nonetheless the parser we get respects the constraints which are formulated by the LFG theory. Let us now consider some of the details. Xhe n~les under (i) are transformed into the PROLOG programme in (6). (* indicates the variables.) (6) S (*el0 *ell *outps) <--NP (*el0 *c12 *featnp *outpnp) VP (*c12 *ell (SIBJ (*outpnp *featnp)) T~ *outpa) VP (*clO *ell *outpsubj *featv *outps) <-v (*cent (~o~mmb/~) *leafy *outps) F~/~IP (*el0 *¢12 OBJ ~ *ill)Ifun£tional FA(~ (*¢12 *c13 OBJ2 ~ *~) controll FAf~=P (*el3 *el40BL ~ *~) FA¢~" (*¢14 *ell *oont xcem ~ nil) l i~iAst~ FAOJP' (*clO *ell (*gf *cont) *gf ) . *i0) *10) ~-VP" (*¢I0 *ell *cont *outpxcomp) NP (*el0 *ell *ontpnp) <- lET (*el0 *¢ii *ontpdet) N (*outpdet *outpnp)We use the content of the function assigning equations to build up parts of the whole f-structure during the parsing process. Crur~al for this is the fact dmt every phrase has a ~mique category, called its head, with the property that the functional features of each phrase are identified with those of its head. The head category of a phrase is characterized by d~e assignment of the trivial ft~%ctional-equation and by the property of being a major category, ql%e output of each procedure is constructed by the subprocedure corresponding to the head. ~ means that all information resulting from the other subprooedures is given to that goal. ll~is is done by the 'outp' variables in the programme.ThUS the V procedure builds up the f-structure of the sentence. Since VP is the head of the S rule the VP procedure has an argument variable for the SUB7 f-structure. Since V is the head of the VP rule this variable together with the structures coming fore the sister nodes are given to V for the construction of the final output.As a consequence our output does not contain pointers in contrast to Bresnan' s output. Rather the argument positions of the predicates are instantiated by the indicated f-stmmtures. For each category there is a fixed set of features, l~e head category is able to impose restrictions on a fixed subset of that feature set. This subset is placed on a prominent position, l~e corresponding feature values percolating up towmrds the head category will end up in the sate position d&~anding that their values agree. Tois is done by the ' feat" variables. The ~aiqueneas condition is trivially fulfilled since the passing around of parts of the f-structure is done by variables, and PROIOG instantiates a variable with at most one value.. (7) V ( (V(KEP (SL~J (*outpobj *featobj)))Ifenctional control] ((S[BJ (*outpsubj (SG 3))) ~ Icheck listl (OBJ (*outpobj *featobJ)) (XC~MP *outpxcomp)) +'-I output I ((TK~SE m~) (reED "EXPECt (*outpaubj *outpxcemp)')) ) ~he checking of the completeness and coherence condition is done by the Verb procedure. 
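Since the transcription of programme (6) above is hard to read, the following DCG sketch restates the idea in modern Edinburgh notation. It is an illustration rather than the authors' original code: all names are simplified assumptions, the feature sets are heavily reduced, and the infinitival complement of the sample sentence is omitted, so "expects" is treated here as a simple transitive verb. The point it shows is the one just made: the f-structure is threaded through the rules as an argument, the head daughter assembles the clause's f-structure from the material handed down by its sisters, and agreement is enforced simply by unifying the feat terms.

% Illustrative DCG sketch of the idea behind programme (6).
s(FStr)        --> np(SubjF, Feat), vp(subj(SubjF, Feat), FStr).
vp(Subj, FStr) --> v(Subj, ObjF, FStr), np(ObjF, _ObjFeat).
np(fstr(pred(N), spec(Det), num(sg)), feat(sg, 3)) --> det(Det), n(N).

% The verb, as head, builds the clause's f-structure; requiring feat(sg, 3)
% on the subject implements (SUBJ NUM)=SG and (SUBJ PERS)=3, and the
% uniqueness condition holds trivially because a variable is bound only once.
v(subj(SubjF, feat(sg, 3)), ObjF,
  fstr(tense(pres), pred(expect(SubjF, ObjF)), subj(SubjF), obj(ObjF)))
    --> [expects].

det(a)  --> [a].    det(an) --> [an].    det(the) --> [the].
n(woman) --> [woman].    n(american) --> [american].

% ?- phrase(s(F), [a, woman, expects, an, american]).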
(7) shows the PROLOG assertion corresponding to the lexical entry for 'expects'.In every assertion for verbs there is a list containing the g~=m~,~tical ftmctions subcategorized by the verb. This is the second argument in (7), called "check list'. ~ list is passed around during the parse. ~lis is done by the list umderlined with waves in (6). Every subcategorlzable f~action appearing in the sentence must be able to shorten the llst. Tnis guarantees coherence.In the end the list must have diminished to NIL. This guarantees completene&s. As can be seen in (7) a by-product of this passing around the check list is to bring the values of the grammtical functions subcategorized by the verb down to the verb's predicate argument structure. To handle famctional control the verb entry contains an argument to encode the controller. Ibis is the first argument in (7). lhe procedure ~li.ch delivers XC~MP (here the VP" procedure) receives d~is variable (the underlined variable *cont in (6)) since verbs can induce ft~ctional control only upon the open grammtical famction XOCMP.For toug~ement constructions the s-prime procedure receives the controller variable too. But inside this clause the controller must be put onto the long distance controller list, since SCCMP is not an open grammatical function. That leads us to the long distance dependencies (8) The glrl wonders whose playmate's nurse the baby saw.S" Toe superscript S of the one controller indicates that the corresponding controlee has to be found in a S-rooted control domain whereas the [-kwh] controlee for the other controller has to be found beneath a ~ node. Finally the box around the S-node reeds to be explained. It indicates the fact that the node is a boLmding node. Kaplan/Bresnan state the following convention A node M helor~s to a control domain with root node R if and only if R dominates M and there are no bo~iding nodes on the path from M up to but not including R.--> NP .p [] (+Focns)=~ (10) / s NP /VP~ V S' ~,,~ ~ N NP VP \ i Y-k I IX / \ , .il~ .~_ N I IET N VThe girl wondered what the m~se asked who saw Long distance control is haldle by the programme using a long distance controller list, enriched at some special nodes with new oontrollers, passed down the tree and not allowed to go further at the bounding nodes. S ((*oL!t~np*f_eatnj~ !S_N~)) ~ *outpsc) Every time a controlne is found its subscript has to match the corresponding entry of the first menber of the controller list. If this happens the first element will be deleted from the list. The fact that a controlee can only match the first elenent reflects the crossed dependency constraint. *clO is the input controller variable of the S" procedure in (12). *cll is the output variable. *clO is expanded by the [4wh] controller within the NP subgoal. This controller must find its controllee during d~e e~ecution of the NP goal.Note that the output variable of the NP subgoal is identical with the output variable of the main goal and that the subgoal S" does have different controller lists. ~ reflects the effect of the box aroLmd the S-node, i.e. no controller coming do,retards can find its controlee inside the S-prncedure. l~e only controller going into the S goal is the one introduced below the NP node with dnmsln root S. Clearly the output variable of S has to be nil. There are rules which allow for certain controllers to pass a boxed node Bresna~Kaplan state for example the rule in (13).(13) s" --> (nhat) sThis rule has the effect that S-rooted contollers are allowed to pass the box. 
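A filter of the kind needed for rule (13) might, purely as an illustration (the predicate name and the ctrl/2 term are assumptions, not the authors' code), look like this:

% Illustrative sketch: a controller is a term ctrl(Category, DomainRoot); only
% controllers whose control domain is rooted in S may cross the bounding node
% into an embedded S.
pass_bounding_node([], []).
pass_bounding_node([ctrl(Cat, s)|Rest], [ctrl(Cat, s)|Passed]) :-
    !,
    pass_bounding_node(Rest, Passed).
pass_bounding_node([_|Rest], Passed) :-
    pass_bounding_node(Rest, Passed).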
Here we use a test procedure which puts only the contollers iedexed by S onto the controller list going to the S goal. ~ereby we obtain the right treatment of sentence (14). (14) the girl wondered who John believed that Mary claimed that the baby saw . In a corres~eding manner the complex NP 'whose playmate's nurse" of sentence (8) is analysed.As senantic representation we use the D(iscourse) R(epresentation) T(heory) developped by Hans Yamp [4] . I.e. we do not adopt the semantic theory for L(exical) F(unctional) C~rammr) proposed by Per-Kristian Halverson [2] . Halverson translates the f~nctional structures of LFG into so-called semantic structures being of the same structural nature, namely scyclic graphs. The semlntin structures are the result of a translation procedure which is based on the association of formulas of intensional logic to the semantic forms appearing in the functional structure. The reason not to take this approach will be explained by postulating some requirements a se~anclc representation has to fulfill in order to account for a processing of texts. Tnen we will show that these requlr~ents are rP~I]y necessary by analysing some sample sente,ces and discourses.It will turn out that ~T accoante for them in an intuitively fully satisfactory ~y. Because we cannot review [RT in detail here the reader should consult one of the papers explaining the ftmdanentals of the theory (e.g. [~] ), or he should first look at the last paragraph in which an outline is given of how our parser is to be extended in order to yield an IRS-typed output -instead of the 'traditional' (semantic) flmctional structures. The basic building principle of a semantic representation is to associate with every signlfic2mt lexical entry (i.e., every entry which does contribute to the truthcondldtlonsl aspect of the meaning of a sentence) a semantic structure. Compositional principles, then, will construct the semantic representation of a sentence by combining these se~antlc structures according to their syntactic relations. The desired underlying principle is that the smmntlc structures associated with the semantic forms should not be. changed during the composition process. To vat it dif6erently:one ~nts the association of the semantic structures to be independent of the syntactic context in which the semantic form appears. This requirement leads to difficulties in the tradition of translating sentences into formulas of e.g. predicate or intentional logic. Consider sentences (I) If Johe admires a woman then he kisses her and (2) Every man who a~ires a woman kisses her the truth conditions of which are determined by the first order fommlas respectively. ~le problem is that the definite description "a woman" reemerges as universally quantified in the logical representation-and there is no way out, because the prono~m "she" has to be boLmd to the wommn in question. I~T provides a general acco~mt of the meaning of indefinite descriptions, conditionals, tmiversally quantified noun phrases and anaphoric pronoun, s.t. our first requirement is satisfied. 1~e semantic represEmtations (called nRs's) which are assigned to sentences in which such constructions jointly appear have the truth conditions which our intuitions attribute to them. The second reas~ why we decided to use I~R as semantic formalism for LFG is that the constraction principles for a sentence S(i) of a text D = S(1), .... S(n) are fozmulated with respect to the semantic representation of the prec~Ing text S(1),... 
,S(i-l).1~erefore the theory can accotmt for intersentential semantic relationships in the same way as for intrasentential ones. ~ is the second requirement: a s~antic representation has to represent the discourse as a whole and not as the mere union of the s~antic representations of its isolated sentences. A third requirenent a senantlc representation has to fulfill is the reflection of configurational restrictions on anaphoric links: If one embeds sentence (2) into a conditional (6) *If every man who admires a woman kisses her then she is stressed the anaphoric link in (2) is preserved.But (6) does -for configurational reasons -not allow for an anaphoric relation between the "she" and "a woman". The same happens intersententially as shown by (7) If Jo~m admires a woman tl~n he kisses her. *She is enraged. A last requirement we will stipulate here is the following: It is neccessary to draw inferences already during the construction of the semantic representation of a sentence S(i) of the discourse.The inferences must operate on the semantic representation of the already analyzed discourse S(1),... ,S(i-l) as well as on a database containing the knowledge the text talks about. ~ requirement is of major importance for the analysis of definite descriptions. Consider (8) Pedro is a farmer. If a woman loves him then he is happy. Mary loves Pedro. The happy farmer marries her in which the definite description "the happy farme•' is used to refer to refer to the individual denoted by "Pedro". In order to get this llnk one has to infer that Pedro is indeed a happy farmer and that he is the only ore. If this were not the case the use of the definite description would not he appropriate. Such a deduction mechanism is also needed to analyse sentence (9) John bought a car. the engine has 160 horse powers In this case one has to take into account some ~nowledge of the world, nanely the fact that every car has exactly one engine. To illustrate the ~y the s~mmtic representation has to be interpreted let us have a brief look at the text-~RS for the sample discourse (8)[ Pedrou v love(v,u) I leve(y,u) I~u,v)ThUS a IRS K consists of (i) a set of discourse referents: discourse individuals, discourse events, discourse propositions, etc. (il) a set of conditions of the following types -atomic conditions, i.e. n-ary relations over discourse referents -complex conditions, i.e. n-ary relations (e.g.--> or :) over sub-~S's and discourse referents (e.g. K(1) --> K(2) or p:K, where p is a discourse proposition) A whole ~S can be tmderstoed as partial model representing the individuals introduced by the discourse as well as the facts and rules those individuals are subject to. The truth conditions state that a IRS K is true in a model M if there is a proper imbedding from K Into M. Proper embedding is defined as a f~mction f from the set of discourse referents of K in to M s.t. (i) it is a homomorphism for the atomic conditions of the IRS and (il) -for the c~se of a complex condition K(1) --> I((2) every proper embedding of K(1) that extends f is extendable to a proper embedding of K(2).for the case of a complex condition p:K the modelthenretlc object correlated with p (i.e. a proposition if p is a discourse proposition, an event if p is a discourse event, etc.) must be such that it allows for a proper embedding of K in it. Note that the definition of proper embedding has to be made more precise in order to adapt it to the special s~nantica one uses for propositional attitudes. We cannot go into details bare. 
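For concreteness, one possible Prolog encoding of such structures (an assumption for illustration, not necessarily the representation used in the implementation) is a term drs(Universe, Conditions), with imp(K1, K2) for a complex condition K1 --> K2. The first two sentences of discourse (8), "Pedro is a farmer. If a woman loves him then he is happy.", then come out as:

% Illustrative encoding of a DRS as drs(Referents, Conditions).
sample_drs(drs([u],
               [pedro(u),
                farmer(u),
                imp(drs([v], [woman(v), love(v, u)]),
                    drs([],  [happy(u)]))])).

The indefinite "a woman" stays an ordinary referent v of the antecedent sub-DRS, and the pronouns are simply resolved to u and v; no universally quantified formula has to be manufactured, which is exactly the point made about sentences (1) and (2) above.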
Nonet/~lese the truth condition as it stands should make clear the following: whether a discourse referent introduced implies existence or not depends on its position in the hierarchy of the IRS's. C/ven a nRS which is true in M then eactly those referents introduced in the very toplevel [RS imply existence; all others are to he interpreted as ~iversally quantified, if they occur in an antecedent IRS, or as existentially quantified if they occur in a consequent BRS, or as having opaque status if they occur in a ~S specified by e.g. a discourse proposition. Tnus the role of the hierarchical order of the BRS's is to build a base for the definition of truth conditions. But furthemnore the hierarchy defines an accessibility relation, which restricts the set of possible antecedents of anaphorie NP's.Ibis aceessibiltity relation is (for the fra~nent in [~]) defined as follows: For a given sub-ERS K0 all referents occurring in NO or in any of the n~S's in which NO is embedded are accessible. Furthermore if NO is a consequent-~S then the referents occurring in its corresponding antecedent I]~S on the left are accessible too. This gives us a correct trea~aent for (6) and (7). For the time being -we have no algorithm which restricts and orders the set of possible anaphorie antecedents ~-*-ording to contextual conditions as given by e.g. (5) John is reading a book on syntax and Bill is reading a book on s~-oatics o a paperback J Therefore our selection set is restricted only by the accessibility relation and the descriptive content of the anaphoric NP" s. Of course for a~apheric pronouns this content is reduced to a minimum, namely the grm~rstical features associated to them by the lexical entries. This accounts e.g. for the difference in acceptability of (I0) and (II). (I0) Mary persuaded every man to shave |dmself (II) *~4ary promised every man to shave himself The ~S's for (i0) and (II) show that beth discourse referents, the one for '~r~' and the one for a '~an", are accessible from the position at which the reflexive prex~an has to be resolved. But if the '~dmselP' of (ii) is replaced by x it cannot he identified with y having the (not explicitely shown) feature female.Ii0")I Y *~')/ / mary = y / ipers~de(y~,p)l / ~ prom~(y~,p)Definite dese~tue of the semantic content of their co,mon-noun-phrases and the existence and ~niqeeness conditions presupposed by th~n. "~erefore in order to analyse definite descriptions we look for a discourse referent introduced in the preceding IRS for which the description holds and we have to check whether this descrition holds for one referent only. Our algorithm proceeds as follows: First we build up a small IRS NO encoding the descriptive content of the common-no~-phrase of the definite description together with its ~miqlmess and existency condition:El): x farmer(x) happy(x) Y I L happy(y) _],%econd we have to show that we can prove I<0 out of the text-nRS of the preceeding discourse , with the restriction that only accessible referents are taken into account. The instantiation of *x by this proof gives us the correct anteoedent the definite description refers to. Now we forget about NO and replace the antecedent discourse referent for the definite noun phrase to get the whole text-IRS (8'). Of course it is possible that the presuppositions are not mentioned explicitely in the discourse but follow implicitely from the text alone or from the text together with the knowledge of the domain it talks about. So in cases like (9) John bought a car. 
The engine has 260 horse powers Pere the identified referent is functionally related to referents that are more directly accessible, nmne_ly to John's car. Furthermore such a functional dependency confers to a definite description the power of introducing a new discourse referent, nanely the engine which is functionally determined by the car of which it is part. ~ shifts the task from the search for a direct antecedent for "the engine" to the search for the referent it is f%mctionelly related to. But the basic mechanism for finding this referent is the same deductive mechanism just outlined for the '~lappy farme~" example.~ "GRAMMATICAL PARSIAK~' AND "lOGICAL P~RSIN~' In this section we will outline the principles anderlying the extension of our parser to produce ~S's as output. Because none of the fragments of ~T contains Raising-and Equi-verbs taking infinitival or that-complements we are confronted with the task of writing construction rules for such verbs. It will turn out, however, that it is not difficult to see how to extend ~T to eomprise such constructions. "ibis is due to the fact that using LFG as syntactic base for IRT -and not the categorial syntax of Kamp -the ~raveling of the thematic relations in a sentence is already accomplished in f-structure. Therefore it is streightfo~rd to formulate construction rules which give the correct readings for (i0) and (ii) of the previous section, establish the propositional equivalence of pairs with or without Raising, Equi (see (I), (2)), etc. (I) John persuaded Mary to come (2) John persuaded ~%~ry that she should come let us first describe the BRS construction rules by the f~niliar example (3) every man loves a woman Using Ksmp's categorial syntax, the construction rules operate top down the tree. The specification of the order in which the parts of the tree are to he treated is assumed to be given by the syntactic rules. I.e. the specification of scope order is directly determined by the syntactic construction of the sentence.We will deal with the point of scope ambiguities after baying described the ~y a BRS is constructed.Our description -operating bottom up instead top down -is different from the one given in [4] in order to come closer to the point we want to make. But note that this differei~ce is not ~l genuine one. ~hus according to the first requiranent of the previous section we assume that to each semantic from a semantic structure is associated. For the lexical entries of (3) we ~mve The picture should make clear the way we ~mnt to extend the parsing mechanism described in section 1 in order to produce ~S's as output ~ no more f-stroctures: instead of partially instantiated f-structures determined by the lexical entries partially instsntiated IRS's are passed eround the tree getting aocc~plished by unification. Toe control mechanism of LFG will automatically put the discourse referents into the correct argument position of the verb. lhus no additional work has to be done for the g~=~,~atical relations of a sentence. But what about the logical relations? Recall that each clause has a unique head end that the functional features of each phrase are identified with those of its head. For (3) the head of S -~> NPVP is the VP and the head of VP --> V NP is the V. %h~m the outstanding role of the verb to determine and restrict the grmmmtical'relations of the sentence is captured. 
(4) , however, shows that the logical relations of the sentence are mainly determined by its determiners, which are not ~eads of the NP-phrases and the NP~phrases thsmselves are not the heads of the VP-and S-phrase respectively.To account foc this dichotomy we will call the syntactically defined notion of head "grammatical head" and we will introduce a further notion of "logical head" of a phrase. Of course, in order to make the definition work it has to be elaborated in a way that garantses that the logical head of a phrase is uniquely determied too. Consider (~) John pe.rsuaded an american to win hwin(y) The fact that (7) does not neccesserily imply existence of ~m 8merlcan whereas (6) does is triggered by the difference between Equl-and R~dslng-verbe.Suppose we define the NP to he the logical hend of the phrase VP --> V NP VP I. ~ the logical relations of the VP would be those of the ~E ~. This amounts to incorporating the logical structures of the V and the VP ~ into the logical structure of the NP, which is for both (6) and 7and thus would lead to the readings represented in (6") and (7"). 0onsequentiy (7") ~mlld not he produced. Defining the logical head to be the VP | would exclude the r~a~.gs (6") and (7"').Evidently the last possibility of defining the logical head to be identical to the grammatical head, namely the V itself, seems to be the only solution. But this would block the construction already at the stage of unifying the NP-and VPhstructures with persuade(*,*,*) or expect(*,*). At first thought one easy way out of this dilemma is to associate with the lexical entry of the verb not the mere n-place predicate but a IRS containing this predicate as atomic condition, lhis makes the ~lification possible but gives us the following result: Of course ooe~is open to produce the set of ~S's representing (6) and (7). BUt this means that one has to work on (*)after having reached the top of the tree -a consequence that seems undesirable to us.the only way out is to consider the logical head as not being uniquely identified by the mere phrase structure configurations. As the above example for the phrase VP --> V NP VP ~ shows its head depends on the verb class too. But we will still go further. We claim that it [s possible to make the logical head to additionslly depend on the order of the surface string, on the use of active and passive voice and probably others. Ibis will give us a preference ordering of the scope ambiguities of sentences as the following: -Every man loves a Woman -A Woman is loved by every man -A ticket is bought by every man -Every man bought a ticket %he properties of ~lification granmers listed above show that the theoretical frsm~ork does not impose any restrictions on that plan.
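The Equi/Raising contrast discussed for (6) and (7) can be made concrete in the drs(Universe, Conditions) encoding sketched earlier, writing prop(P, K) for a condition p:K. The two terms below only illustrate the intended truth-conditional difference (the predicate names and the non-specific reading chosen for the Raising verb are assumptions); they are not output of the system described here.

% (6) "John persuaded an American to win": the indefinite's referent x sits in
% the top-level universe, so existence of an American is entailed.
drs_persuade(drs([j, x, p],
                 [john(j), american(x),
                  persuade(j, x, p),
                  prop(p, drs([], [win(x)]))])).

% (7) (the corresponding Raising/expectation sentence), non-specific reading:
% x lives inside the sub-DRS of the expected proposition, so nothing follows
% about the existence of an American.
drs_expect(drs([j, p],
               [john(j),
                expect(j, p),
                prop(p, drs([x], [american(x), win(x)]))])).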
null
null
To mirror this the following PROLOG feature is of greatest importance. For the construction of the wanted output during the parsing process structures can he built up piecsneal, leaving unspecified parts as variables. The construction of the output need not he strictly parallel to the application of the corresponding rules. Variables play the role of placeholders for structures which are found possibly later in the parsing process. A closer look at the verb entries as formulated by LFG reveals that the role of the f~mction names appearing there is to function as placeholders too. To summarize: By embedding the restricted LFG formalism into the hlgly flexible definite clause grammr fonmg/ismwemake llfe easier. Nonetheless the parser we get respects the constraints which are formulated by the LFG theory. Let us now consider some of the details. Xhe n~les under (i) are transformed into the PROLOG programme in (6). (* indicates the variables.) (6) S (*el0 *ell *outps) <--NP (*el0 *c12 *featnp *outpnp) VP (*c12 *ell (SIBJ (*outpnp *featnp)) T~ *outpa) VP (*clO *ell *outpsubj *featv *outps) <-v (*cent (~o~mmb/~) *leafy *outps) F~/~IP (*el0 *¢12 OBJ ~ *ill)Ifun£tional FA(~ (*¢12 *c13 OBJ2 ~ *~) controll FAf~=P (*el3 *el40BL ~ *~) FA¢~" (*¢14 *ell *oont xcem ~ nil) l i~iAst~ FAOJP' (*clO *ell (*gf *cont) *gf ) . *i0) *10) ~-VP" (*¢I0 *ell *cont *outpxcomp) NP (*el0 *ell *ontpnp) <- lET (*el0 *¢ii *ontpdet) N (*outpdet *outpnp)We use the content of the function assigning equations to build up parts of the whole f-structure during the parsing process. Crur~al for this is the fact dmt every phrase has a ~mique category, called its head, with the property that the functional features of each phrase are identified with those of its head. The head category of a phrase is characterized by d~e assignment of the trivial ft~%ctional-equation and by the property of being a major category, ql%e output of each procedure is constructed by the subprocedure corresponding to the head. ~ means that all information resulting from the other subprooedures is given to that goal. ll~is is done by the 'outp' variables in the programme.ThUS the V procedure builds up the f-structure of the sentence. Since VP is the head of the S rule the VP procedure has an argument variable for the SUB7 f-structure. Since V is the head of the VP rule this variable together with the structures coming fore the sister nodes are given to V for the construction of the final output.As a consequence our output does not contain pointers in contrast to Bresnan' s output. Rather the argument positions of the predicates are instantiated by the indicated f-stmmtures. For each category there is a fixed set of features, l~e head category is able to impose restrictions on a fixed subset of that feature set. This subset is placed on a prominent position, l~e corresponding feature values percolating up towmrds the head category will end up in the sate position d&~anding that their values agree. Tois is done by the ' feat" variables. The ~aiqueneas condition is trivially fulfilled since the passing around of parts of the f-structure is done by variables, and PROIOG instantiates a variable with at most one value.. (7) V ( (V(KEP (SL~J (*outpobj *featobj)))Ifenctional control] ((S[BJ (*outpsubj (SG 3))) ~ Icheck listl (OBJ (*outpobj *featobJ)) (XC~MP *outpxcomp)) +'-I output I ((TK~SE m~) (reED "EXPECt (*outpaubj *outpxcemp)')) ) ~he checking of the completeness and coherence condition is done by the Verb procedure. 
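To make the head-driven, piecemeal construction just described more concrete, the following is a minimal, self-contained definite clause grammar sketch written in the spirit of programme (6), but it is not the authors' code: all predicate, feature and lexeme names are illustrative, the feature apparatus is reduced to a token num/pers pair, and only the example sentence with 'expects' and its functional control equation is covered.

% Minimal sketch (illustrative, not programme (6)): the VP is the head
% of S, so the subject f-structure built by np//1 is handed to vp//2,
% which unifies it directly into the verb's PRED; the object f-structure
% is shared, by unification, with the subject of the open XCOMP.

s(FS)            --> np(SubjFS), vp(SubjFS, FS).

np(fs(spec:Spec, num:sg, pers:3, pred:Pred)) --> det(Spec), n(Pred).

vp(SubjFS, fs(tense:pres,
              subj:SubjFS, obj:ObjFS, xcomp:XcompFS,
              pred:expect(SubjFS, XcompFS))) -->
    [expects], np(ObjFS), vpinf(ObjFS, XcompFS).   % functional control

vpinf(SubjFS, fs(subj:SubjFS, pred:win(SubjFS))) --> [to, win].

det(a)  --> [a].
det(an) --> [an].
n(woman)    --> [woman].
n(american) --> [american].

% ?- phrase(s(FS), [a,woman,expects,an,american,to,win]).
% binds FS to a single nested f-structure, built up piecemeal while
% parsing, with the object structure shared with the XCOMP subject.

In the same spirit, the completeness and coherence checking mentioned at the end of the preceding paragraph can be caricatured by two toy predicates (again our own simplification): every governable function found in the clause must be able to shorten the verb's check list, and the list must end up empty.

consume(GF, [GF | Rest], Rest).
consume(GF, [Other | Rest], [Other | Rest1]) :-
    GF \= Other,
    consume(GF, Rest, Rest1).

complete([]).

% ?- consume(subj, [subj,obj,xcomp], L1), consume(obj, L1, L2),
%    consume(xcomp, L2, L3), complete(L3).      % succeeds
% ?- consume(obl, [subj,obj,xcomp], _).          % fails: incoherent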
(7) shows the PROLOG assertion corresponding to the lexical entry for 'expects'.In every assertion for verbs there is a list containing the g~=m~,~tical ftmctions subcategorized by the verb. This is the second argument in (7), called "check list'. ~ list is passed around during the parse. ~lis is done by the list umderlined with waves in (6). Every subcategorlzable f~action appearing in the sentence must be able to shorten the llst. Tnis guarantees coherence.In the end the list must have diminished to NIL. This guarantees completene&s. As can be seen in (7) a by-product of this passing around the check list is to bring the values of the grammtical functions subcategorized by the verb down to the verb's predicate argument structure. To handle famctional control the verb entry contains an argument to encode the controller. Ibis is the first argument in (7). lhe procedure ~li.ch delivers XC~MP (here the VP" procedure) receives d~is variable (the underlined variable *cont in (6)) since verbs can induce ft~ctional control only upon the open grammtical famction XOCMP.For toug~ement constructions the s-prime procedure receives the controller variable too. But inside this clause the controller must be put onto the long distance controller list, since SCCMP is not an open grammatical function. That leads us to the long distance dependencies (8) The glrl wonders whose playmate's nurse the baby saw.S" Toe superscript S of the one controller indicates that the corresponding controlee has to be found in a S-rooted control domain whereas the [-kwh] controlee for the other controller has to be found beneath a ~ node. Finally the box around the S-node reeds to be explained. It indicates the fact that the node is a boLmding node. Kaplan/Bresnan state the following convention A node M helor~s to a control domain with root node R if and only if R dominates M and there are no bo~iding nodes on the path from M up to but not including R.--> NP .p [] (+Focns)=~ (10) / s NP /VP~ V S' ~,,~ ~ N NP VP \ i Y-k I IX / \ , .il~ .~_ N I IET N VThe girl wondered what the m~se asked who saw Long distance control is haldle by the programme using a long distance controller list, enriched at some special nodes with new oontrollers, passed down the tree and not allowed to go further at the bounding nodes. S ((*oL!t~np*f_eatnj~ !S_N~)) ~ *outpsc) Every time a controlne is found its subscript has to match the corresponding entry of the first menber of the controller list. If this happens the first element will be deleted from the list. The fact that a controlee can only match the first elenent reflects the crossed dependency constraint. *clO is the input controller variable of the S" procedure in (12). *cll is the output variable. *clO is expanded by the [4wh] controller within the NP subgoal. This controller must find its controllee during d~e e~ecution of the NP goal.Note that the output variable of the NP subgoal is identical with the output variable of the main goal and that the subgoal S" does have different controller lists. ~ reflects the effect of the box aroLmd the S-node, i.e. no controller coming do,retards can find its controlee inside the S-prncedure. l~e only controller going into the S goal is the one introduced below the NP node with dnmsln root S. Clearly the output variable of S has to be nil. There are rules which allow for certain controllers to pass a boxed node Bresna~Kaplan state for example the rule in (13).(13) s" --> (nhat) sThis rule has the effect that S-rooted contollers are allowed to pass the box. 
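One way to picture the filtering step licensed by rule (13) is the following sketch, written in our own notation rather than that of the implementation: each controller carries the index of its domain root, at a boxed (bounding) node only controllers whose index licenses passage are handed on, and the crossed-dependency constraint is reflected in letting a controlee discharge only the first member of the controller list.

% discharge/3: a controlee may only consume the first controller.
discharge(Controlee, [Controlee | Rest], Rest).

% pass_bounding/3: at a bounding node, keep only controllers whose
% domain-root index matches the one the rule allows to pass.
pass_bounding(_Root, [], []).
pass_bounding(Root, [ctrl(Root, C) | Cs], [ctrl(Root, C) | Passed]) :-
    pass_bounding(Root, Cs, Passed).
pass_bounding(Root, [ctrl(Other, _C) | Cs], Passed) :-
    Other \= Root,
    pass_bounding(Root, Cs, Passed).

% e.g. only S-rooted controllers survive the boxed node of rule (13):
% ?- pass_bounding(s, [ctrl(s, who), ctrl(np, whose)], L).
% L = [ctrl(s, who)].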
Here we use a test procedure which puts only the contollers iedexed by S onto the controller list going to the S goal. ~ereby we obtain the right treatment of sentence (14). (14) the girl wondered who John believed that Mary claimed that the baby saw . In a corres~eding manner the complex NP 'whose playmate's nurse" of sentence (8) is analysed.As senantic representation we use the D(iscourse) R(epresentation) T(heory) developped by Hans Yamp [4] . I.e. we do not adopt the semantic theory for L(exical) F(unctional) C~rammr) proposed by Per-Kristian Halverson [2] . Halverson translates the f~nctional structures of LFG into so-called semantic structures being of the same structural nature, namely scyclic graphs. The semlntin structures are the result of a translation procedure which is based on the association of formulas of intensional logic to the semantic forms appearing in the functional structure. The reason not to take this approach will be explained by postulating some requirements a se~anclc representation has to fulfill in order to account for a processing of texts. Tnen we will show that these requlr~ents are rP~I]y necessary by analysing some sample sente,ces and discourses.It will turn out that ~T accoante for them in an intuitively fully satisfactory ~y. Because we cannot review [RT in detail here the reader should consult one of the papers explaining the ftmdanentals of the theory (e.g. [~] ), or he should first look at the last paragraph in which an outline is given of how our parser is to be extended in order to yield an IRS-typed output -instead of the 'traditional' (semantic) flmctional structures. The basic building principle of a semantic representation is to associate with every signlfic2mt lexical entry (i.e., every entry which does contribute to the truthcondldtlonsl aspect of the meaning of a sentence) a semantic structure. Compositional principles, then, will construct the semantic representation of a sentence by combining these se~antlc structures according to their syntactic relations. The desired underlying principle is that the smmntlc structures associated with the semantic forms should not be. changed during the composition process. To vat it dif6erently:one ~nts the association of the semantic structures to be independent of the syntactic context in which the semantic form appears. This requirement leads to difficulties in the tradition of translating sentences into formulas of e.g. predicate or intentional logic. Consider sentences (I) If Johe admires a woman then he kisses her and (2) Every man who a~ires a woman kisses her the truth conditions of which are determined by the first order fommlas respectively. ~le problem is that the definite description "a woman" reemerges as universally quantified in the logical representation-and there is no way out, because the prono~m "she" has to be boLmd to the wommn in question. I~T provides a general acco~mt of the meaning of indefinite descriptions, conditionals, tmiversally quantified noun phrases and anaphoric pronoun, s.t. our first requirement is satisfied. 1~e semantic represEmtations (called nRs's) which are assigned to sentences in which such constructions jointly appear have the truth conditions which our intuitions attribute to them. The second reas~ why we decided to use I~R as semantic formalism for LFG is that the constraction principles for a sentence S(i) of a text D = S(1), .... S(n) are fozmulated with respect to the semantic representation of the prec~Ing text S(1),... 
,S(i-l).1~erefore the theory can accotmt for intersentential semantic relationships in the same way as for intrasentential ones. ~ is the second requirement: a s~antic representation has to represent the discourse as a whole and not as the mere union of the s~antic representations of its isolated sentences. A third requirenent a senantlc representation has to fulfill is the reflection of configurational restrictions on anaphoric links: If one embeds sentence (2) into a conditional (6) *If every man who admires a woman kisses her then she is stressed the anaphoric link in (2) is preserved.But (6) does -for configurational reasons -not allow for an anaphoric relation between the "she" and "a woman". The same happens intersententially as shown by (7) If Jo~m admires a woman tl~n he kisses her. *She is enraged. A last requirement we will stipulate here is the following: It is neccessary to draw inferences already during the construction of the semantic representation of a sentence S(i) of the discourse.The inferences must operate on the semantic representation of the already analyzed discourse S(1),... ,S(i-l) as well as on a database containing the knowledge the text talks about. ~ requirement is of major importance for the analysis of definite descriptions. Consider (8) Pedro is a farmer. If a woman loves him then he is happy. Mary loves Pedro. The happy farmer marries her in which the definite description "the happy farme•' is used to refer to refer to the individual denoted by "Pedro". In order to get this llnk one has to infer that Pedro is indeed a happy farmer and that he is the only ore. If this were not the case the use of the definite description would not he appropriate. Such a deduction mechanism is also needed to analyse sentence (9) John bought a car. the engine has 160 horse powers In this case one has to take into account some ~nowledge of the world, nanely the fact that every car has exactly one engine. To illustrate the ~y the s~mmtic representation has to be interpreted let us have a brief look at the text-~RS for the sample discourse (8)[ Pedrou v love(v,u) I leve(y,u) I~u,v)ThUS a IRS K consists of (i) a set of discourse referents: discourse individuals, discourse events, discourse propositions, etc. (il) a set of conditions of the following types -atomic conditions, i.e. n-ary relations over discourse referents -complex conditions, i.e. n-ary relations (e.g.--> or :) over sub-~S's and discourse referents (e.g. K(1) --> K(2) or p:K, where p is a discourse proposition) A whole ~S can be tmderstoed as partial model representing the individuals introduced by the discourse as well as the facts and rules those individuals are subject to. The truth conditions state that a IRS K is true in a model M if there is a proper imbedding from K Into M. Proper embedding is defined as a f~mction f from the set of discourse referents of K in to M s.t. (i) it is a homomorphism for the atomic conditions of the IRS and (il) -for the c~se of a complex condition K(1) --> I((2) every proper embedding of K(1) that extends f is extendable to a proper embedding of K(2).for the case of a complex condition p:K the modelthenretlc object correlated with p (i.e. a proposition if p is a discourse proposition, an event if p is a discourse event, etc.) must be such that it allows for a proper embedding of K in it. Note that the definition of proper embedding has to be made more precise in order to adapt it to the special s~nantica one uses for propositional attitudes. We cannot go into details bare. 
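To fix ideas about the shape of these structures, here is one possible Prolog encoding of a DRS as a term drs(Referents, Conditions), with a complex condition written ifthen(K1, K2); the notation is purely illustrative and our own, not necessarily that of the implementation. It shows the DRS for sentence (1) above, 'If John admires a woman then he kisses her'.

% The indefinite 'a woman' puts its referent into the antecedent
% sub-DRS; the pronoun 'her' is resolved to that referent, which is
% accessible from the consequent.

sample_drs(
    drs([X],
        [ named(X, john),
          ifthen(drs([Y], [woman(Y), admires(X, Y)]),
                 drs([],  [kisses(X, Y)])) ])).

In this encoding the sharing of the Prolog variable Y between antecedent and consequent realizes the anaphoric link, while the fact that Y is declared inside the antecedent sub-DRS rather than at the top level is what the truth conditions turn into universal quantification; X, declared at the top level, implies existence.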
Nonet/~lese the truth condition as it stands should make clear the following: whether a discourse referent introduced implies existence or not depends on its position in the hierarchy of the IRS's. C/ven a nRS which is true in M then eactly those referents introduced in the very toplevel [RS imply existence; all others are to he interpreted as ~iversally quantified, if they occur in an antecedent IRS, or as existentially quantified if they occur in a consequent BRS, or as having opaque status if they occur in a ~S specified by e.g. a discourse proposition. Tnus the role of the hierarchical order of the BRS's is to build a base for the definition of truth conditions. But furthemnore the hierarchy defines an accessibility relation, which restricts the set of possible antecedents of anaphorie NP's.Ibis aceessibiltity relation is (for the fra~nent in [~]) defined as follows: For a given sub-ERS K0 all referents occurring in NO or in any of the n~S's in which NO is embedded are accessible. Furthermore if NO is a consequent-~S then the referents occurring in its corresponding antecedent I]~S on the left are accessible too. This gives us a correct trea~aent for (6) and (7). For the time being -we have no algorithm which restricts and orders the set of possible anaphorie antecedents ~-*-ording to contextual conditions as given by e.g. (5) John is reading a book on syntax and Bill is reading a book on s~-oatics o a paperback J Therefore our selection set is restricted only by the accessibility relation and the descriptive content of the anaphoric NP" s. Of course for a~apheric pronouns this content is reduced to a minimum, namely the grm~rstical features associated to them by the lexical entries. This accounts e.g. for the difference in acceptability of (I0) and (II). (I0) Mary persuaded every man to shave |dmself (II) *~4ary promised every man to shave himself The ~S's for (i0) and (II) show that beth discourse referents, the one for '~r~' and the one for a '~an", are accessible from the position at which the reflexive prex~an has to be resolved. But if the '~dmselP' of (ii) is replaced by x it cannot he identified with y having the (not explicitely shown) feature female.Ii0")I Y *~')/ / mary = y / ipers~de(y~,p)l / ~ prom~(y~,p)Definite dese~tue of the semantic content of their co,mon-noun-phrases and the existence and ~niqeeness conditions presupposed by th~n. "~erefore in order to analyse definite descriptions we look for a discourse referent introduced in the preceding IRS for which the description holds and we have to check whether this descrition holds for one referent only. Our algorithm proceeds as follows: First we build up a small IRS NO encoding the descriptive content of the common-no~-phrase of the definite description together with its ~miqlmess and existency condition:El): x farmer(x) happy(x) Y I L happy(y) _],%econd we have to show that we can prove I<0 out of the text-nRS of the preceeding discourse , with the restriction that only accessible referents are taken into account. The instantiation of *x by this proof gives us the correct anteoedent the definite description refers to. Now we forget about NO and replace the antecedent discourse referent for the definite noun phrase to get the whole text-IRS (8'). Of course it is possible that the presuppositions are not mentioned explicitely in the discourse but follow implicitely from the text alone or from the text together with the knowledge of the domain it talks about. So in cases like (9) John bought a car. 
The engine has 260 horse powers Pere the identified referent is functionally related to referents that are more directly accessible, nmne_ly to John's car. Furthermore such a functional dependency confers to a definite description the power of introducing a new discourse referent, nanely the engine which is functionally determined by the car of which it is part. ~ shifts the task from the search for a direct antecedent for "the engine" to the search for the referent it is f%mctionelly related to. But the basic mechanism for finding this referent is the same deductive mechanism just outlined for the '~lappy farme~" example.~ "GRAMMATICAL PARSIAK~' AND "lOGICAL P~RSIN~' In this section we will outline the principles anderlying the extension of our parser to produce ~S's as output. Because none of the fragments of ~T contains Raising-and Equi-verbs taking infinitival or that-complements we are confronted with the task of writing construction rules for such verbs. It will turn out, however, that it is not difficult to see how to extend ~T to eomprise such constructions. "ibis is due to the fact that using LFG as syntactic base for IRT -and not the categorial syntax of Kamp -the ~raveling of the thematic relations in a sentence is already accomplished in f-structure. Therefore it is streightfo~rd to formulate construction rules which give the correct readings for (i0) and (ii) of the previous section, establish the propositional equivalence of pairs with or without Raising, Equi (see (I), (2)), etc. (I) John persuaded Mary to come (2) John persuaded ~%~ry that she should come let us first describe the BRS construction rules by the f~niliar example (3) every man loves a woman Using Ksmp's categorial syntax, the construction rules operate top down the tree. The specification of the order in which the parts of the tree are to he treated is assumed to be given by the syntactic rules. I.e. the specification of scope order is directly determined by the syntactic construction of the sentence.We will deal with the point of scope ambiguities after baying described the ~y a BRS is constructed.Our description -operating bottom up instead top down -is different from the one given in [4] in order to come closer to the point we want to make. But note that this differei~ce is not ~l genuine one. ~hus according to the first requiranent of the previous section we assume that to each semantic from a semantic structure is associated. For the lexical entries of (3) we ~mve The picture should make clear the way we ~mnt to extend the parsing mechanism described in section 1 in order to produce ~S's as output ~ no more f-stroctures: instead of partially instantiated f-structures determined by the lexical entries partially instsntiated IRS's are passed eround the tree getting aocc~plished by unification. Toe control mechanism of LFG will automatically put the discourse referents into the correct argument position of the verb. lhus no additional work has to be done for the g~=~,~atical relations of a sentence. But what about the logical relations? Recall that each clause has a unique head end that the functional features of each phrase are identified with those of its head. For (3) the head of S -~> NPVP is the VP and the head of VP --> V NP is the V. %h~m the outstanding role of the verb to determine and restrict the grmmmtical'relations of the sentence is captured. 
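As a toy illustration of passing partially instantiated DRSs around the tree (our own, much simplified sketch, not the extension itself), each NP can be made to introduce a discourse referent together with its descriptive condition, and unification then puts the referents into the argument slots of the verb's condition. Quantifier scope, the 'logical head' issue taken up next, is deliberately ignored, so the result is a single flat DRS.

:- use_module(library(lists)).   % append/3 (autoloaded in SWI-Prolog)

sentence_drs(drs([X, Y], Conds)) -->
    np(X, C1), v(X, Y, C2), np(Y, C3),
    { append(C1, C2, C12), append(C12, C3, Conds) }.

np(X, [man(X)])   --> [every, man].
np(X, [woman(X)]) --> [a, woman].
v(X, Y, [loves(X, Y)]) --> [loves].

% ?- phrase(sentence_drs(D), [every, man, loves, a, woman]).
% D = drs([_A, _B], [man(_A), loves(_A, _B), woman(_B)]).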
(4) , however, shows that the logical relations of the sentence are mainly determined by its determiners, which are not ~eads of the NP-phrases and the NP~phrases thsmselves are not the heads of the VP-and S-phrase respectively.To account foc this dichotomy we will call the syntactically defined notion of head "grammatical head" and we will introduce a further notion of "logical head" of a phrase. Of course, in order to make the definition work it has to be elaborated in a way that garantses that the logical head of a phrase is uniquely determied too. Consider (~) John pe.rsuaded an american to win hwin(y) The fact that (7) does not neccesserily imply existence of ~m 8merlcan whereas (6) does is triggered by the difference between Equl-and R~dslng-verbe.Suppose we define the NP to he the logical hend of the phrase VP --> V NP VP I. ~ the logical relations of the VP would be those of the ~E ~. This amounts to incorporating the logical structures of the V and the VP ~ into the logical structure of the NP, which is for both (6) and 7and thus would lead to the readings represented in (6") and (7"). 0onsequentiy (7") ~mlld not he produced. Defining the logical head to be the VP | would exclude the r~a~.gs (6") and (7"').Evidently the last possibility of defining the logical head to be identical to the grammatical head, namely the V itself, seems to be the only solution. But this would block the construction already at the stage of unifying the NP-and VPhstructures with persuade(*,*,*) or expect(*,*). At first thought one easy way out of this dilemma is to associate with the lexical entry of the verb not the mere n-place predicate but a IRS containing this predicate as atomic condition, lhis makes the ~lification possible but gives us the following result: Of course ooe~is open to produce the set of ~S's representing (6) and (7). BUt this means that one has to work on (*)after having reached the top of the tree -a consequence that seems undesirable to us.the only way out is to consider the logical head as not being uniquely identified by the mere phrase structure configurations. As the above example for the phrase VP --> V NP VP ~ shows its head depends on the verb class too. But we will still go further. We claim that it [s possible to make the logical head to additionslly depend on the order of the surface string, on the use of active and passive voice and probably others. Ibis will give us a preference ordering of the scope ambiguities of sentences as the following: -Every man loves a Woman -A Woman is loved by every man -A ticket is bought by every man -Every man bought a ticket %he properties of ~lification granmers listed above show that the theoretical frsm~ork does not impose any restrictions on that plan. Appendix:
null
null
null
null
{ "paperhash": [ "frey|automatic_construction_of_a_knowledge_base_by_analysing_texts_in_natural_language" ], "title": [ "Automatic Construction of a Knowledge Base by Analysing Texts in Natural Language" ], "abstract": [ "We present a system which translates sentences from a subset of German into a database. This data-base will function as the basis for a question-answering-systern. \n \nThe system is applied to a complete text and not to isolated sentences. As an intermediate stage between the German text and the database we use the Discourse Representation Structures (DRS) invented by Hans Kamp. Karnp's system has been chosen because it handles intrasentential and intersentential relations uniformly. Within Kamp's system one can account for certain types of anaphoric relations for which no other linguistic theory has provided a solution. \n \nThe input to our system is analysed by a parser which is based on lexical functional grammar. This is the first attempt to combine research on discourse representation with lexical functional grammar with the help of the formalism of Definite Clause Grammar. \n \nFor the construction of the database out of the DRS's, two solutions arc proposed. First, a translation of the DRS's into a set of PROLOG clauses enriched with some additional deductive principles. Second, the formulation of inference rules which operate directly on the DRS. \n \nSo far we have implemented the following components: parser of German, translation rules which map syntactic trees into DRS's and rules which translate DRS's into PROLOG-clauses." ], "authors": [ { "name": [ "W. Frey", "Uwe Reyle", "C. Rohrer" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null ], "s2_corpus_id": [ "10290074" ], "intents": [ [ "background", "methodology" ] ], "isInfluential": [ false ] }
null
497
0.036217
null
null
null
null
null
null
null
null
bc48f570810986c3b98ad57edf7aacda3aa12e1a
788527
null
Extended Access to the Left Context in an {ATN} Parser
Some Italian sentences related to linguistic phenomena that are well known and have recently been discussed by many computational linguists are examined in the ATN framework. They present certain difficulties which seem to call for a substantial revision of the ATN formalism. The theoretical assumptions and
{ "name": [ "Prodanof, Irina and", "Ferrari, Giacomo" ], "affiliation": [ null, null ] }
null
null
First Conference of the {E}uropean Chapter of the Association for Computational Linguistics
1983-09-01
26
1
null
null
null
A storage overload may also be caused by the need for raising lexieal features. ~orphological features are necessary at the least in the test of subject-verb agreement. This is done by LIFTRIng in ad hoe registers gender and number from the NP level to the S level.If the :;P is popped in a possible subject position the test for agreement may take place by comparing the content of those registers with the corresponding features of the verb. However, there are cases such as ex.(1) in which such information must be used again in the course of the analysis for another (agreement) test.Those features must be~ therefore, copied in appropriately labelled registers in order to i) flag their relation to the subject and ii) prevent theln from being erased when the s~ne features are LIFTRed from the following NP. as the ~:P "i suoi colle~;hi" is analysed, it replaces the SENDRed "Giovanni" in the SUBJ register and the correct interpretation is popped. ?recessing< the last corn ple~,~ent three re:~isters contaizling the three possible subjects will be available and shall be visited in order to find the ri~iht one(3).If we look at the discussed exa,,;ples free an entirely functional viewpoint ~Je can describe them as having in common the need for retrieving(2) *John was sure that his enne~,ies would have disclosed to the press that his wife had once told tha't (he) ha.:! bested her*. ~,'e give here, for clarity, the parenthesized form of this exmuple: (Giovanni era sicuro (che i suol nemlei avrebbero rlvelato alia stamps (ehe sua moglie aveva detto un glorno (che l*aveva picchiata))). I!otice that in this example the subject-verb agreement is sufficient to select the right antecedent, but t|'is is not always the case.(3) A possible alternative, equally trlcky,is the use of the HOLD-VIRT couple. information somewhere back in the already built structures; the tricky solutions presented above are, in fact, a way of accessing parts of the left context. These sometimes correspond to the entire content of a register and sometimes to a fragment of it.We will assume, then, that the left context is stored in a space of memory, equally accessible from any level and that retrieving always concerns fragments of it. At any point of the process this structure contains the current hypothesis about the analysis of the parsed segment of the input from the beginning; hence we will refer to it as Current Global Hypothesis (CGH).The retrieving action will have two participants, a symbol that triggers the action (trig£er) and the infomnation to be retrieved (the target of the action).In this frame all the different procedures discussed above may be reduced to a single general algorithm of three steps, i) identification of a trl~er (a gap to be filled, a verb uhich demands for the subject-verb agreement test) ii) extraction of constraints which ::~ust guide the search for the target, and iii) retrieving of the required inforuation. In an AT!~ these cases ~:eet the initial set of arcs ~hich recognize a PP e~,bedded iu an NP, as iu (14) thus the value of SUI;J=:IEAD is "he'.The functions that access the data structure are specifically desi.~;ned to treat this type of representation but ~.,e think that they could be easily Feneralizcd.The ter-n "component" will be used to identify the get of paths startin;; fro,~ the sa:ae lal;el (radix). 
At any point in which non-determinism is called, the previous context, in particular the data, is saved and only the new values are set in the current context.Therefore, there is no difference between the use of the traditional register table and this special list since both of them are handled in the same way. This (LIFO) list contains at any point of the process the CGI', i.e. the entire left context literally represented in ten,~s of attribute-value pairs. We give hereafter a llst in Backus notation of the functions which access the CCH. They can search either only the current level (CL) or throu;,h the entire list (T). In this latter case the current level is excluded and, if no further options are specified, the lower (the nearest to the top) occurrence is returned. Another option (dtype) returns all the occurrences either appen,}ed in a list (L) or one "y one, non-deter:,inistieally (UD). ,', third optio1~ evaluates conditions in order to select the cn;~pohent i~entified by the specified path. The three last actions, PUSI!, POP, and I:~S]:'.P,T, manipulate the items in the list. PUSX adds a nee (empty) ite:,t in front of the list. The elements of the co~ponent being analysed (phrases or sentences) are ADDed it~ this top item, which has been therefore referred to as current level. POP re.coves the current top-ite+~ and e.:beds it into the ne~¢ top-item, possibly ~ssidning a label to the corresL;.onding co;aponent. Finally li!Si2~T inserts an itei,, correspondin to ~: nu:: level, so+mubere back between "ite+a" an:! the front part of the list, and fills it ~ith "data'. List ~anipulation takes place independently from the starting or the ending of the process expressed in a subnet.Thus a eo+aponent can be POPed after the end of its recognition procedure, wben also its function is clarified. The are recognizing an object, for ex., can be expressed as follows (START NP T (COND (FI::I?+ (SITBj) T CL T) (POP OBJ)) (TO qi)) which means that if there already is a subject, the current couponent must be popped with the label OBJ.The use of the IESERT function is primarily motivated by the treatment of certain relative clauses.Felative pronouns arc surface sijnals that tridger the embedding into ~, relativ~ clau~e of tim currently processed co.+q,oncnt(s). In the sentenc¢~ (17)11 libro della tra;,,a del quale i,arlava:to ['he book about The plot of whici ve tal,:e,1such an e,:~bedding take~ ',lace L.,:c~iatel~.' ~ft~r "libro', thus i'.roduciu< (4) An "anapk:~ric" facility is a~Iso i.:plc~Lented not to repeat an er,:beddcd fo~'m with the s0+::e ar:.u:.cnt as the e.ahcdcJin,, one. The general rule may be formulated as follows: "a new level labelled ~ELATIVECb\USE is to be inserted immediately after the antecedent of the relative prottoun'. Analysis of (17) will therefore proceed as follows; • -when the relative pronoun "quale" is encountered, the for;n (FIND (HEAD) (AND (Et] (FINDVAL (HEAD GEN) T T ~:D) (FINDVAL (DET GEN) T CL T)) (EQ (FINDVAL (HEAD N~) r T ND) (FINDVAL (bET N~) r CL T))) T T) no substantial difference exists in comparison with the traditional register access.In the discussed complex cases the access to the CGII is a known function of the length of the list, i.e. of the depth of embedding of tlle current level. 
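A rough sketch of these CGH operations, written here in Prolog rather than in the Lisp-style ATN notation of the original (all names are our own and the embedded components of a level are not searched recursively), keeps the CGH as a LIFO list of levels, each level a list of attribute-value pairs.

:- use_module(library(lists)).   % member/2

% PUSH: open a new, empty current level.
push_level(CGH, [[] | CGH]).

% ADD: record an attribute-value pair at the current level.
add_pair(Attr, Val, [Level | Rest], [[Attr-Val | Level] | Rest]).

% POP: close the current level and embed it, under a label, as a
% component of the level below.
pop_level(Label, [Top, Next | Rest], [[Label-Top | Next] | Rest]).

% INSERT: open a new level between the current level and the rest
% (used, e.g., to embed material into a relative clause).
insert_level(Data, [Top | Rest], [Top, Data | Rest]).

% FIND, current level only (CL).
find_cl(Attr, [Level | _], Val) :- member(Attr-Val, Level).

% FIND through the whole left context (T), excluding the current
% level; the first solution is the occurrence nearest to the top.
find_t(Attr, [_ | Lower], Val) :-
    member(Level, Lower), member(Attr-Val, Level).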
Within any item search proceeds linearly as for any ordinary pattern-matching.The only substantially ne~ fact is the possibility of embedding the current component; this eliminates the need for backtracking, at least for some sentences.In conclusion, it seems that if there is a difference from the traditional ATN it is in favour of the version presented here.returns the lower head ~hich agrees in number and gender with the determiner of "quale" ('quale" is both masculine and feminine), i.e. "llbro'. This is the antecedent. The set of actions and forr~s presented seem to provide a functional descril;tion of many linguistic pheno~nena. They can be regarded as linguistic (procedural) generalizations, at least on the functional y~round. This supports our claim that linguistic pheno~:~ena can be described, in~lepen~ently fro~ tbc fon;~alisu that expresses them ( the grammar), in ter~.is of general operations.This set of operations is open-ended and can, therefore, be increased with functions designed for the treatment of new phenomena, as they are discussed and described. Furthermore, those actions can be taken to represent nlental operations of the language user, thus providing a valuable frame for psycLolinguistic experiments.ADVA~:TAGESThe parser we have been presenting is based on the core algorithm of the AT~J. Our modifications affect the set of for, us and actions and the data structure. The parsin~ algorithu~, therefore, keeps the efficiency of traditional ATE. We have already shown that the storing of the data structure does not present any special difference from the traditional re:~isters syste~, even in relation to the treatL~ent of non-determini~l.The r,:emory load is. therefore, strictly a function of tile length of the parsed se:_,L::ent of the input an(] no overhead determined by t~anipulations of structures is added as in the case of transit registers.The actiol~s an,! fom~s are equivalent to the tra~?itional ones, but for the fact that [~ost of tile.., :Lust visit the t~holc left context for every access. ~.~y:;ay this effec~ hardly l,alances the s~tting of transit re~,+isters. In fact, it is ~;or th noting: that in the ~lajority of comrlon sentences such accesses are very reduced, go that It is obvious that this view strongly inclines towards the idea of parser as a collection of heuristic strategies and processes and also offers a aye,|metric alternative to the HOLD hypothesis. According to thls hypothesis there are points in a sentence in which comprehension needs a heavier memory load; instead in our view an overhead of operations is suggested.Anyway the distinguished phenomena coincide, thus keeping the inte~rity of the experimental data(6).Our hypothesis seems more natural in t~.,o ~Jays. It embeds into a non-detemninistic frame so~+e operations very similar to some of those designed and discussed in the deterlnini.'~tlc hypotilesis [3, 4, 15, 16, 19] . The result is a strong limitation of the effects of non-determinism, at least for those cases they are desigue~t to treat. Then, a model such as the non-detem~inistic one, in which there is place for the study of human heuristic constraints, seems more attractive and natural.Our hypothesis seems intuitively natural also in so much as it tries to propose a "theory of guess ~. During the comprehension of a sentence guesses (CGII's) are progressively enriched and stored in a space of memory. 
During this process errors may he done.For some of them it is enough to ,aodify the previous guess while for others a real backtracking and reanalysis is necessary. Although the distinction between the two types of errors is unclear, it provides a valuable frame for further research in the domain of computational linguistics as well as psycholinguisties.In [>articular it seems to distim3uish in the activity of sentence comprehension a phase of structuring from a phase of perception. Errors occurring in t~,e former are remedied by ~nodifyin~ a guess, while those occurring in the latter need baektrackin~ and the choice of another strategy.A more serious systematization of the proposed functions, as well as the extension of the model to ~ore and more llnguistic phenomena are obvious extensions of the present project.Another direction where investigation seems to be particularly fruitfull is the relation between syntax and ser.:antics. On one hand, the fact that the result of the analysis is progressively stored in a unique space of uemory :lo not it:pose special constraints on the structure of the analyzed strin~. On the other hand, many of the presented functions include parameter slots for conditions which may be filled with any kind of test. This t~odel see:qs, therefore, to avoid "physiological" bounderies between syntax and semantics. The stored structure can be a semantic one and the tests can also incorporate se~;.antic descriptions. This seems to eventually lead to an easier integration of the two levels, h~e will present shortly [i0] a first approxi~.ation to a frame into which such in inte~ration can be realized.
Certain types of sentences seem to defy the abilities of several parsers, and some of them are now being discussed by many computational linguists, mostly within the deterministic hypothesis. An examination of their treatment within the traditional ATN paradigm seems to suggest that the real discussion is about how to access the left context and what its form should be. A. ATN Grammars. An ATN grammar is a set of networks formed by labelled states and directed arcs connecting them. The arcs can recognize terminal (words) and non-terminal (lexical categories) symbols, or recursively call for a network identified by the label of an initial state. When such a call is completed, the recognized constituent is returned to the calling level. Actions on the arcs can set a register (SETR), retrieve the content of a register (GETR) or transfer it to another register (a combination of SETR with a GETR). This last operation is equivalent to i) the renaming of a register, if the source register is successively set to a different value, ii) the initialization of a register at a lower or higher level, if SENDR or LIFTR are used. Initialization is commonly used to i) raise lexical features to a higher level where they are used for tests (ex.: subject-verb agreement), ii) pass possible antecedents to lower levels where a gap may be detected in an embedded clause. The antecedent passing may cause a theoretically unlimited increase in storage load. By the standard procedure, in the analysis of the ambiguous sentence (1) Giovanni disse che aveva mentito (John said that (he) had lied), "Giovanni" is always SENDRed as possible SUBJect of a complement as soon as "disse" is recognized as an STRANS verb. As no subject NP is met after "che", an interpretation is yielded with "Giovanni" in subject position. The second interpretation is produced simply by successively setting the SUBJ register to a dummy node, which remains unfilled. (The ambiguity of sentence (1) is the same as in its English translation, where "he" can be bound either to "John" or to someone else mentioned in a previous sentence; Italian has a gap instead of a pronoun.) The same treatment is recursively applied to sentences like (2) Giovanni pensava che avrebbe raccontato a tutti che aveva fatto una scoperta (John thought that (he) would have told everybody that (he) had made a discovery), where "Giovanni" must serve as subject of both the first and the second (linearly) complement. (3) Giovanni disse che i suoi colleghi avevano mentito (John said that his colleagues had lied)
null
Main paper: lexieal features raising: A storage overload may also be caused by the need for raising lexieal features. ~orphological features are necessary at the least in the test of subject-verb agreement. This is done by LIFTRIng in ad hoe registers gender and number from the NP level to the S level.If the :;P is popped in a possible subject position the test for agreement may take place by comparing the content of those registers with the corresponding features of the verb. However, there are cases such as ex.(1) in which such information must be used again in the course of the analysis for another (agreement) test.Those features must be~ therefore, copied in appropriately labelled registers in order to i) flag their relation to the subject and ii) prevent theln from being erased when the s~ne features are LIFTRed from the following NP. as the ~:P "i suoi colle~;hi" is analysed, it replaces the SENDRed "Giovanni" in the SUBJ register and the correct interpretation is popped. ?recessing< the last corn ple~,~ent three re:~isters contaizling the three possible subjects will be available and shall be visited in order to find the ri~iht one(3).If we look at the discussed exa,,;ples free an entirely functional viewpoint ~Je can describe them as having in common the need for retrieving(2) *John was sure that his enne~,ies would have disclosed to the press that his wife had once told tha't (he) ha.:! bested her*. ~,'e give here, for clarity, the parenthesized form of this exmuple: (Giovanni era sicuro (che i suol nemlei avrebbero rlvelato alia stamps (ehe sua moglie aveva detto un glorno (che l*aveva picchiata))). I!otice that in this example the subject-verb agreement is sufficient to select the right antecedent, but t|'is is not always the case.(3) A possible alternative, equally trlcky,is the use of the HOLD-VIRT couple. information somewhere back in the already built structures; the tricky solutions presented above are, in fact, a way of accessing parts of the left context. These sometimes correspond to the entire content of a register and sometimes to a fragment of it.We will assume, then, that the left context is stored in a space of memory, equally accessible from any level and that retrieving always concerns fragments of it. At any point of the process this structure contains the current hypothesis about the analysis of the parsed segment of the input from the beginning; hence we will refer to it as Current Global Hypothesis (CGH).The retrieving action will have two participants, a symbol that triggers the action (trig£er) and the infomnation to be retrieved (the target of the action).In this frame all the different procedures discussed above may be reduced to a single general algorithm of three steps, i) identification of a trl~er (a gap to be filled, a verb uhich demands for the subject-verb agreement test) ii) extraction of constraints which ::~ust guide the search for the target, and iii) retrieving of the required inforuation. In an AT!~ these cases ~:eet the initial set of arcs ~hich recognize a PP e~,bedded iu an NP, as iu (14) thus the value of SUI;J=:IEAD is "he'.The functions that access the data structure are specifically desi.~;ned to treat this type of representation but ~.,e think that they could be easily Feneralizcd.The ter-n "component" will be used to identify the get of paths startin;; fro,~ the sa:ae lal;el (radix). 
At any point in which non-determinism is called, the previous context, in particular the data, is saved and only the new values are set in the current context.Therefore, there is no difference between the use of the traditional register table and this special list since both of them are handled in the same way. This (LIFO) list contains at any point of the process the CGI', i.e. the entire left context literally represented in ten,~s of attribute-value pairs. We give hereafter a llst in Backus notation of the functions which access the CCH. They can search either only the current level (CL) or throu;,h the entire list (T). In this latter case the current level is excluded and, if no further options are specified, the lower (the nearest to the top) occurrence is returned. Another option (dtype) returns all the occurrences either appen,}ed in a list (L) or one "y one, non-deter:,inistieally (UD). ,', third optio1~ evaluates conditions in order to select the cn;~pohent i~entified by the specified path. The three last actions, PUSI!, POP, and I:~S]:'.P,T, manipulate the items in the list. PUSX adds a nee (empty) ite:,t in front of the list. The elements of the co~ponent being analysed (phrases or sentences) are ADDed it~ this top item, which has been therefore referred to as current level. POP re.coves the current top-ite+~ and e.:beds it into the ne~¢ top-item, possibly ~ssidning a label to the corresL;.onding co;aponent. Finally li!Si2~T inserts an itei,, correspondin to ~: nu:: level, so+mubere back between "ite+a" an:! the front part of the list, and fills it ~ith "data'. List ~anipulation takes place independently from the starting or the ending of the process expressed in a subnet.Thus a eo+aponent can be POPed after the end of its recognition procedure, wben also its function is clarified. The are recognizing an object, for ex., can be expressed as follows (START NP T (COND (FI::I?+ (SITBj) T CL T) (POP OBJ)) (TO qi)) which means that if there already is a subject, the current couponent must be popped with the label OBJ.The use of the IESERT function is primarily motivated by the treatment of certain relative clauses.Felative pronouns arc surface sijnals that tridger the embedding into ~, relativ~ clau~e of tim currently processed co.+q,oncnt(s). In the sentenc¢~ (17)11 libro della tra;,,a del quale i,arlava:to ['he book about The plot of whici ve tal,:e,1such an e,:~bedding take~ ',lace L.,:c~iatel~.' ~ft~r "libro', thus i'.roduciu< (4) An "anapk:~ric" facility is a~Iso i.:plc~Lented not to repeat an er,:beddcd fo~'m with the s0+::e ar:.u:.cnt as the e.ahcdcJin,, one. The general rule may be formulated as follows: "a new level labelled ~ELATIVECb\USE is to be inserted immediately after the antecedent of the relative prottoun'. Analysis of (17) will therefore proceed as follows; • -when the relative pronoun "quale" is encountered, the for;n (FIND (HEAD) (AND (Et] (FINDVAL (HEAD GEN) T T ~:D) (FINDVAL (DET GEN) T CL T)) (EQ (FINDVAL (HEAD N~) r T ND) (FINDVAL (bET N~) r CL T))) T T) no substantial difference exists in comparison with the traditional register access.In the discussed complex cases the access to the CGII is a known function of the length of the list, i.e. of the depth of embedding of tlle current level. 
Within any item search proceeds linearly as for any ordinary pattern-matching.The only substantially ne~ fact is the possibility of embedding the current component; this eliminates the need for backtracking, at least for some sentences.In conclusion, it seems that if there is a difference from the traditional ATN it is in favour of the version presented here.returns the lower head ~hich agrees in number and gender with the determiner of "quale" ('quale" is both masculine and feminine), i.e. "llbro'. This is the antecedent. The set of actions and forr~s presented seem to provide a functional descril;tion of many linguistic pheno~nena. They can be regarded as linguistic (procedural) generalizations, at least on the functional y~round. This supports our claim that linguistic pheno~:~ena can be described, in~lepen~ently fro~ tbc fon;~alisu that expresses them ( the grammar), in ter~.is of general operations.This set of operations is open-ended and can, therefore, be increased with functions designed for the treatment of new phenomena, as they are discussed and described. Furthermore, those actions can be taken to represent nlental operations of the language user, thus providing a valuable frame for psycLolinguistic experiments.ADVA~:TAGESThe parser we have been presenting is based on the core algorithm of the AT~J. Our modifications affect the set of for, us and actions and the data structure. The parsin~ algorithu~, therefore, keeps the efficiency of traditional ATE. We have already shown that the storing of the data structure does not present any special difference from the traditional re:~isters syste~, even in relation to the treatL~ent of non-determini~l.The r,:emory load is. therefore, strictly a function of tile length of the parsed se:_,L::ent of the input an(] no overhead determined by t~anipulations of structures is added as in the case of transit registers.The actiol~s an,! fom~s are equivalent to the tra~?itional ones, but for the fact that [~ost of tile.., :Lust visit the t~holc left context for every access. ~.~y:;ay this effec~ hardly l,alances the s~tting of transit re~,+isters. In fact, it is ~;or th noting: that in the ~lajority of comrlon sentences such accesses are very reduced, go that It is obvious that this view strongly inclines towards the idea of parser as a collection of heuristic strategies and processes and also offers a aye,|metric alternative to the HOLD hypothesis. According to thls hypothesis there are points in a sentence in which comprehension needs a heavier memory load; instead in our view an overhead of operations is suggested.Anyway the distinguished phenomena coincide, thus keeping the inte~rity of the experimental data(6).Our hypothesis seems more natural in t~.,o ~Jays. It embeds into a non-detemninistic frame so~+e operations very similar to some of those designed and discussed in the deterlnini.'~tlc hypotilesis [3, 4, 15, 16, 19] . The result is a strong limitation of the effects of non-determinism, at least for those cases they are desigue~t to treat. Then, a model such as the non-detem~inistic one, in which there is place for the study of human heuristic constraints, seems more attractive and natural.Our hypothesis seems intuitively natural also in so much as it tries to propose a "theory of guess ~. During the comprehension of a sentence guesses (CGII's) are progressively enriched and stored in a space of memory. 
During this process errors may occur. For some of them it is enough to modify the previous guess, while for others real backtracking and reanalysis is necessary. Although the distinction between the two types of errors is unclear, it provides a valuable frame for further research in the domain of computational linguistics as well as psycholinguistics. In particular, it seems to distinguish, within the activity of sentence comprehension, a phase of structuring from a phase of perception. Errors occurring in the former are remedied by modifying a guess, while those occurring in the latter need backtracking and the choice of another strategy.

A more serious systematization of the proposed functions, as well as the extension of the model to more and more linguistic phenomena, are obvious extensions of the present project. Another direction where investigation seems to be particularly fruitful is the relation between syntax and semantics. On the one hand, the fact that the result of the analysis is progressively stored in a unique space of memory does not impose special constraints on the structure of the analyzed string. On the other hand, many of the presented functions include parameter slots for conditions which may be filled with any kind of test. This model seems, therefore, to avoid "physiological" boundaries between syntax and semantics. The stored structure can be a semantic one, and the tests can also incorporate semantic descriptions. This seems eventually to lead to an easier integration of the two levels. We will shortly present [10] a first approximation to a frame in which such an integration can be realized.

1. Introduction

Certain types of sentences seem to defy the abilities of several parsers, and some of them are now being discussed by many computational linguists, mostly within the deterministic hypothesis. The study of their treatment within the traditional ATN paradigm seems to suggest that the real discussion is about how to access the left context and what its form should be.

A. ATN Grammars

An ATN grammar is a set of networks formed by labelled states and directed arcs connecting them. The arcs can recognize terminal (words) and non-terminal (lexical categories) symbols, or recursively call for a network identified by the label of an initial state. When such a call ...

Actions can retrieve the content of a register (GETR) or transfer it to another register (a combination of SETR with a GETR). This last operation is equivalent to i) the renaming of a register, if the source register is successively set to a different value, ii) the initialization of a register at a lower or higher level, if SENDR or LIFTR are used. Initialization is commonly used to i) raise lexical features to a higher level where they are used for tests (e.g. subject-verb agreement), ii) pass possible antecedents to lower levels where a gap may be detected in an embedded clause.
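For contrast with the CGH, the following toy fragment sketches the traditional register mechanism being discussed (SETR/GETR plus SENDR/LIFTR initialization across levels). Class and method names are illustrative assumptions; the snippet only shows how an antecedent can be SENDRed down to a complement clause, e.g. for sentence (1) below.

```python
# Toy illustration (not from the paper) of the traditional ATN register
# mechanism that the CGH is meant to replace: each network level keeps its
# own register table, and SENDR/LIFTR copy values down or up between levels.

class ATNLevel:
    def __init__(self, parent=None):
        self.registers = {}
        self.parent = parent
        self.pending = {}            # values SENDRed to the next lower level

    def setr(self, name, value):     # SETR: set a register at this level
        self.registers[name] = value

    def getr(self, name):            # GETR: read a register at this level
        return self.registers.get(name)

    def sendr(self, name, value):    # SENDR: initialize a register of the
        self.pending[name] = value   # level about to be pushed

    def push(self):                  # PUSH a subnet: new level, pre-initialized
        child = ATNLevel(parent=self)
        child.registers.update(self.pending)
        self.pending = {}
        return child

    def liftr(self, name, value):    # LIFTR: initialize a register one level up
        if self.parent is not None:
            self.parent.registers[name] = value


# Once the higher verb is recognized, the subject is SENDRed to the
# complement clause as a possible filler for an as-yet-unseen subject gap.
s = ATNLevel()
s.setr("SUBJ", "Giovanni")
s.sendr("SUBJ", s.getr("SUBJ"))      # antecedent passing
complement = s.push()
print(complement.getr("SUBJ"))       # -> Giovanni
```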
The antecedent passing may cause a theoretically unlimited increase in storage load. By the standard procedure, in the analysis of the ambiguous sentence (1)

(1) Giovanni disse che aveva mentito
    John said that (he) had lied

(The ambiguity of this sentence is the same as that of its English translation, where "he" can be bound either to "John" or to someone else mentioned in a previous sentence; Italian has a gap instead of a pronoun.)

"Giovanni" is always SENDRed as possible SUBJect of a complement as soon as "disse" is recognized as an S-TRANS verb. As no subject NP is met after "che", an interpretation is yielded with "Giovanni" in subject position. The second interpretation is produced simply by successively setting the SUBJ register to a dummy node, which remains unfilled. The same treatment is recursively applied to sentences like (2)

(2) Giovanni pensava che avrebbe raccontato a tutti che aveva fatto una scoperta
    John thought that (he) would have told everybody that (he) had made a discovery

where "Giovanni" must serve as subject of both the first and the second (linearly) complement.

(3) Giovanni disse che i suoi colleghi avevano mentito
    John said that his colleagues had lied

Appendix:
null
null
null
null
{ "paperhash": [ "berwick|a_deterministic_parser_with_broad_coverage", "marcus|d-theory:_talking_about_talking_about_trees", "berwick|syntactic_constraints_and_efficient_parsability", "ferrari|revising_an_atn_parser", "cappelli|automatic_analysis_of_italian", "ferrari|strategy_selection_for_an_atn_syntactic_parser", "shipman|towards_minimal_data_structures_for_deterministic_parsing", "marcus|a_theory_of_syntactic_recognition_for_natural_language", "allen|a_functional_grammar", "bates|language_as_a_cognitive_process", "aho|the_theory_of_parsing,_translation,_and_compiling" ], "title": [ "A Deterministic Parser With Broad Coverage", "D-Theory: Talking about Talking about Trees", "Syntactic Constraints and Efficient Parsability", "Revising an ATN Parser", "Automatic analysis of Italian", "Strategy Selection for an ATN Syntactic Parser", "Towards Minimal Data Structures for Deterministic Parsing", "A theory of syntactic recognition for natural language", "A Functional Grammar", "Language as a Cognitive Process", "The Theory of Parsing, Translation, and Compiling" ], "abstract": [ "This paper is a progress report on a scries of three significant extensions to the original parsing design of (Marcus J980).* The extensions are: Ihe range of syntactic phenomena handled has been enlarged, encompassing sentences with Verb Phrase deletion, gapping, and rightward movement, and an additional output representation of anaphor-antcccdcnt relationships has been added (including pronoun and quantifier interpretation). A complete analysis of the parsing design has been carried out, clarifying the parser's relationship to the extended I R(k,t) parsing method as originally defined by (Knuth 1965) and explored by (Szymanski and Williams 1976). The formal model has led directly to the design of a \"stripped down\" parser that uses standard LR(k) technology and to results about the class of languages that can be handled by Marcus-style parsers (briefly, the class of languages is defined by those that can be handled by a deterministic, two-stack push-down automaton with severe restrictions on the transfer of material between the two sucks, and includes some strictly context-sensitive languages). 1 EXTENDING THE MARCUS PARSER While the Marcus parser handled a wide range of everyday syntactic constructions, there are many common English sentences that it could not analyze. One gap in its abilities arises because it did not have a way to represent the possibility of rightward movement that is, cases where a constituent is displaced to the right: A book [about nuclear disarmament] appeared yesterday. --> A book appeared yesterday [about nuclear disarmament]. Further, the only way that the Marcus parser could handle leftward movement was via the device of linking a \"dummy variable\" (a trace) to an antecedent occurring somewhere earlier in the sentence. For instance, the sentence, \"Who did Mary kiss?\" is parsed as, Who did Mary kiss trace!, where trace is a variable bound to its \"value\" of who, indicating the intuitive meaning of the sentence, \"For which X, did Mary kiss X\" . Jn the original parser design, a trace was of the category NP, so that only Noun Phrases could be linked to traces. But this meant that sentences where other than NPs are displaced or deleted cannot be analyzed. This includes the following kinds of sentences, where deleted material is indicated in square brackets.", "Linguists, including computational linguists, have always been fond of talking about trees. 
In this paper, we outline a theory of linguistic structure which talks about talking about trees; we call this theory Description theory (D-theory). While important issues must be resolved before a complete picture of D-theory emerges (and also before we can build programs which utilize it), we believe that this theory will ultimately provide a framework for explaining the syntax and semantics of natural language in a manner which is intrinsically computational. This paper will focus primarily on one set of motivations for this theory, those engendered by attempts to handle certain syntactic phenomena within the framework of deterministic parsing.", "A central goal of linguistic theory is to explain why natural languages are the way they are. It has often been supposed that computational considerations ought to play a role in this characterization, but rigorous arguments along these lines have been difficult to come by. In this paper we show how a key \"axiom\" of certain theories of grammar, Subjacency, can be explained by appealing to general restrictions on on-line parsing plus natural constraints on the rule-writing vocabulary of grammars. The explanation avoids the problems with Marcus' [1980] attempt to account for the same constraint. The argument is robust with respect to machine implementation, and thus avoids the problems that often arise when making detailed claims about parsing efficiency. It has the added virtue of unifying in the functional domain of parsing certain grammatically disparate phenomena, as well as making a strong claim about the way in which the grammar is actually embedded into an on-line sentence processor.", "1. An ATN p a r s e r f o r I t a l i a n h a s b e e n d e v e l o p e d a n d t e s t e d i n a s e r i e s o f e x p e r i m e n t s o n c o m p l e x s e n t e n c e s t a k e n f r o m n a r r a t i v e t e x t s ( 1 , 2 , 3 , 4 ) . The c o n s t r u c t i o n o f a c o m p l e x graemmz a s w e l l a s t h e r e s u l t s o f o u r e x p e r i m e n t s showed some l i m i t s and i n a d e q u a c i e s o f Woods\" ATN a s i t s t a n d s ( 1 1 ) , e s p e c i a l l y when u s e d t o p a r s e I t a l i a n , a r e l a t i v e l y f r e e o r d e r l a n g u a g e w i t h a w e l l d e v e l o p e d m o r p h o l o g i c a l s y s t e m .", "ATNSYS, an automatic syntactic analyser, has been used for a number of experiments with Italian texts. It is provided with a heuristic mechanism based on probability evaluation. A 'verb frame' representation is introduced. Roth these aspects are discussed and the results of our experiments are considered.", "I. It is impossible to measure the merits of a grammar, seen as the component of an analyser, in absolute terms. An \"ad hoc\" grammar, constructed for a limited set of sentences is, w i t h o u t d o u b t , more efficient in dealing with those particular sentences than a zrammer constructed for a larger set. Therefore, t h e first rudimentary criterion, when evaluating the relation~hlp between a grammar and a set of sentences, should be to establish whether this grammar is c a p a b l e of analysing these sentences. This is the determination of linguistic coverage, and necessitates the definition of the linguistic phenomena, independently of the linguistic theory which has been adopted to recognise these phenomena.", "The determinism hypothesis suggests that natural language may be parsed in a single pass without resort to backtracking techniques. 
The PARSIFAL system, developed by Marcus, incorporates this philosophy in an English language parser. Here, we show that the data structures used by this parser may be considerably simplified resulting in more elegant grammatical specifications.", "Abstract : Assume that the syntax of natural language can be parsed by a left-to-right deterministic mechanism without facilities for parallelism or backup. It will be shown that this 'determinism' hypothesis, explored within the context of the grammar of English, leads to a simple mechanism, a grammar interpreter. (Author)", "Functional Grammar describes grammar in functional terms in which a language is interpreted as a system of meanings. The language system consists of three macro-functions known as meta-functional components: the interpersonal function, the ideational function, and the textual function, all of which make a contribution to the structure of a text. The concepts discussed in Functional Grammar aims at giving contribution to the understanding of a text and evaluation of a text, which can be applied for text analysis. Using the concepts in Functional Grammar, English teachers may help the students learn how various grammatical features and grammatical systems are used in written texts so that they can read and write better.", "Books reviewed in the AJCL will be those of interest to computat ional linguists; books in closely related disciplines may also be considered. The purpose of a book review is to inform readers about the content of the book and to present opinions on the choice of material, manner of presentat ion, and suitability for various readers and purposes. There is no limit to the length of reviews. The appropriate length is determined by its content. If you wish to review a specific book, please contact me before doing so to check that it is not already under review by someone else. If you want to be on a list of potential reviewers, please send me your name and mailing address together with a list of keywords summarizing your areas of interest. You can also suggest books to be reviewed without volunteering to be the reviewer.", "From volume 1 Preface (See Front Matter for full Preface) \n \nThis book is intended for a one or two semester course in compiling theory at the senior or graduate level. It is a theoretically oriented treatment of a practical subject. Our motivation for making it so is threefold. \n \n(1) In an area as rapidly changing as Computer Science, sound pedagogy demands that courses emphasize ideas, rather than implementation details. It is our hope that the algorithms and concepts presented in this book will survive the next generation of computers and programming languages, and that at least some of them will be applicable to fields other than compiler writing. \n \n(2) Compiler writing has progressed to the point where many portions of a compiler can be isolated and subjected to design optimization. It is important that appropriate mathematical tools be available to the person attempting this optimization. \n \n(3) Some of the most useful and most efficient compiler algorithms, e.g. LR(k) parsing, require a good deal of mathematical background for full understanding. We expect, therefore, that a good theoretical background will become essential for the compiler designer. \n \nWhile we have not omitted difficult theorems that are relevant to compiling, we have tried to make the book as readable as possible. 
Numerous examples are given, each based on a small grammar, rather than on the large grammars encountered in practice. It is hoped that these examples are sufficient to illustrate the basic ideas, even in cases where the theoretical developments are difficult to follow in isolation. \n \nFrom volume 2 Preface (See Front Matter for full Preface) \n \nCompiler design is one of the first major areas of systems programming for which a strong theoretical foundation is becoming available. Volume I of The Theory of Parsing, Translation, and Compiling developed the relevant parts of mathematics and language theory for this foundation and developed the principal methods of fast syntactic analysis. Volume II is a continuation of Volume I, but except for Chapters 7 and 8 it is oriented towards the nonsyntactic aspects of compiler design. \n \nThe treatment of the material in Volume II is much the same as in Volume I, although proofs have become a little more sketchy. We have tried to make the discussion as readable as possible by providing numerous examples, each illustrating one or two concepts. \n \nSince the text emphasizes concepts rather than language or machine details, a programming laboratory should accompany a course based on this book, so that a student can develop some facility in applying the concepts discussed to practical problems. The programming exercises appearing at the ends of sections can be used as recommended projects in such a laboratory. Part of the laboratory course should discuss the code to be generated for such programming language constructs as recursion, parameter passing, subroutine linkages, array references, loops, and so forth." ], "authors": [ { "name": [ "R. Berwick" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Mitchell P. Marcus", "Donald Hindle", "Margaret M. Fleck" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "R. Berwick", "A. Weinberg" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "G. Ferrari", "I. Prodanof" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "A. Cappelli", "C. Ferrari", "L. Moretti", "I. Prodanof", "O. Stock" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "G. Ferrari", "O. Stock" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "David W. Shipman", "Mitchell P. Marcus" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Mitchell P. Marcus" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "H. B. Allen", "M. 
Bryant" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Lyn Bates", "T. Winograd" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "A. Aho", "J. Ullman" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null, null, null, null, null, null, null, null, null ], "s2_corpus_id": [ "12595499", "5293716", "5991639", "6077224", "59856887", "28125035", "5399196", "6616065", "150098969", "2209224", "60775129" ], "intents": [ [], [ "methodology" ], [], [], [], [], [], [], [], [], [] ], "isInfluential": [ false, false, false, false, false, false, false, false, false, false, false ] }
- Problem: The Italian sentences discussed in the paper present difficulties that suggest a need for a substantial revision of the ATN formalism, particularly in the extraction of pieces of already processed information from the structure they have been inserted in. - Solution: The paper proposes a functional perspective for processing sentences, involving the retrieval of information from the left context stored in memory, leading to the development of a Current Global Hypothesis (CGH) and a general algorithm with three steps: identification of a trigger, extraction of constraints, and retrieving the required information.
497
0.002012
null
null
null
null
null
null
null
null
8c857c636f7a3c3fd84a554effdd7ae2a8ca7baa
12502799
null
An Experiment With Heuristic Parsing of {S}wedish
Heuristic parsing is the art of doing parsing in a haphazard and seemingly careless manner, but in such a way that the outcome is still "good", at least from a statistical point of view, or, hopefully, even from a more absolute point of view. The idea is to find strategic shortcuts derived from guesses about the structure of a sentence based on scanty observations of linguistic units in the sentence. If the guess comes out right, much parsing time can be saved, and if it does not, many subobservations may still be valid for revised guesses. In the (very preliminary) experiment reported here the main idea is to make use of (combinations of) surface phenomena as much as possible as the base for the prediction of the structure as a whole. In the parser to be developed along the lines sketched in this report, main stress is put on arriving at independently working, parallel recognition procedures.
{ "name": [ "Brodda, Benny" ], "affiliation": [ null ] }
null
null
First Conference of the {E}uropean Chapter of the Association for Computational Linguistics
1983-09-01
8
11
null
hopefully, even from a more absolute point of view. The idea is to find strategic shortcuts derived from guesses about the structure of a sentence based on scanty observations of linguistic units in the sentence. If the guess comes out right, much parsing time can be saved, and if it does not, many subobservations may still be valid for revised guesses. In the (very preliminary) experiment reported here the main idea is to make use of (combinations of) surface phenomena as much as possible as the base for the prediction of the structure as a whole. In the parser to be developed along the lines sketched in this report, main stress is put on arriving at independently working, parallel recognition procedures.

The work reported here is aimed both at simulating certain aspects of human language perception and at arriving at effective algorithms for actual parsing of running text. There is, indeed, a great need for fast such algorithms, e.g. for the analysis of the literally millions of words of running text that already today comprise the data bases in various large information retrieval systems, and which can be expected to expand several orders of magnitude both in importance and in size in the foreseeable future.

The general idea behind the system for heuristic parsing now being developed at our group in Stockholm dates more than 15 years back, when I was making an investigation (together with Hans Karlgren, Stockholm) of the possibilities of using computers for information retrieval purposes for the Swedish Governmental Board for Rationalization (Statskontoret). In the course of this investigation we performed some psycholinguistic experiments aimed at finding out to what extent surface markers, such as endings, prepositions, conjunctions and other (bound) elements from typically closed categories of linguistic units, could serve as a base for a syntactic analysis of sentences. We sampled a couple of texts more or less at random and prepared them in such a way that stems of nouns, adjectives and (main) verbs - these categories being thought of as the main carriers of semantic information - were substituted for by a mere "-", whereas other formatives were left in their original shape and place. These transformed texts were presented to subjects who were asked to fill in the gaps in such a way that the texts thus obtained were to be both syntactically correct and reasonably coherent.

The result of the experiment was rather astonishing. It turned out that not only were the syntactic structures mainly restored; in some few cases also the original content was reestablished, almost word by word. (It was beyond any possibility that the subjects could have had access to the original text.) Even in those cases when the text itself was not restored to this remarkable extent, the stylistic value of the various texts was almost invariably reestablished; an originally lively, narrative story came out as a lively, narrative story, and a piece of rather dull, factual text (from a school text book on sociology) invariably came out as dull, factual prose.

This experiment showed quite clearly that, at least for Swedish, the information contained in the combinations of surface markers to a remarkably high degree reflects the syntactic structure of the original text; in almost all cases also the stylistic value and in some few cases even the semantic content was kept.
(The extent to which this is true is probably language dependent; Swedish is rather rich in morphology, and this property is certainly a contributing factor for an experiment of this type to come out successful to the extent it actually did.)

This type of experiment has since then been repeated many times by many scholars; in fact, it is one of the standard ways to demonstrate the concept of redundancy in texts. But there are several other important conclusions one could draw from this type of experiment. First of all, of course, the obvious conclusion that surface signals do carry a lot of information about the structure of sentences, probably much more than one has been inclined to think, and, consequently, it could be worthwhile to try to capture that information in some kind of automatic analysis system. This is the practical side of it. But there is more to it. One must ask the question why a language like Swedish is like this. What are the theoretical implications?

Much interest has been devoted in later years to theories (and speculations) about human perception of linguistic stimuli, and I do not think that one speculates too much if one assumes that surface markers of the type that appeared in the described experiment together constitute important clues concerning the gross syntactic structure of sentences (or utterances), clues that are probably much less consciously perceived than, e.g., the actual words in the sentences/utterances. To the extent that such clues are actually perceived, they are obviously perceived simultaneously with, i.e. in parallel with, other units (words, for instance).

The above way of looking upon perception as a set of independently operating processes is, of course, more or less generally accepted nowadays (cf., e.g., Lindsay-Norman 1977), and it is also generally accepted in computational linguistics that any program that aims at simulating perception in one way or other must have features that simulate (or, even better, actually perform) parallel processing, and the analysis system to be described below puts much emphasis on exactly this feature.

Another common saying nowadays when discussing parsing techniques is that one should try to incorporate "heuristic devices" (cf., e.g., the many subreports related to the big ARPA project concerning Speech Recognition and Understanding 1970-76), although there does not seem to exist a very precise consensus about what exactly that would mean. (In mathematics the term has traditionally been used to refer to informal reasoning, especially when used in classroom situations. In a famous study the Hungarian mathematician Polya, 1945, put forth the thesis that heuristics is one of the most important psychological driving mechanisms behind mathematical - or scientific - progress. In AI literature it is often used to refer to shortcut search methods in semantic networks/spaces; cf. Lenat, 1982.)

One reason for trying to adopt some kind of heuristic device in the analysis procedures is that one for mathematical reasons knows that ordinary, "careful", parsing algorithms inherently seem to refuse to work in real time (i.e. in linear time), whereas human beings, on the whole, seem to be able to do exactly that (i.e. perceive sentences or utterances simultaneously with their production).
Parallel processing may partly be an answer to that dilemma, but still, any process that claims to actually simulate some part of human perception must in some way or other simulate the remarkable abilities human beings have in grasping complex patterns ("gestalts") seemingly in one single operation.

Ordinary, careful, parsing algorithms are often organized according to some general principle such as "top-down", "bottom-to-top", "breadth first", "depth first", etc., these headings referring to some specified type of "strategy". The heuristic model we are trying to work out has no such preconceived strategy built into it. Our philosophy is instead rather anarchistic (The Heuristic Principle): Whatever linguistic unit can be identified at whatever stage of the analysis, according to whatever means there are, is identified, and the significance of the fact that the unit in question has been identified is made use of in all subsequent stages of the analysis. At any time one must be prepared to reconsider an already established analysis of a unit on the ground that evidence against the analysis may successively accumulate due to what analyses other units arrive at.

In the next section we give a brief description of the analysis system for Swedish that is now under development at our group in Stockholm. As has been said, much effort is spent on trying to make use of surface signals as much as possible. Not that we believe that surface signals play a more important role than any other type of linguistic signals, but rather that we think it is important to try to optimize each single subprocess (in a parallel system) as much as possible, and, as said, it might be worthwhile to look carefully into this level, because the importance of surface signals might have been underestimated in previous research. Our experiments so far seem to indicate that they constitute excellent units to base heuristic guesses on. Another reason for concentrating our efforts on this level is that it takes time and requires much hard computational work to get such an anarchistic system to really work, and this surface level is reasonably simple to handle.

Figure 1 below shows the general outline of the system. Each of the various boxes (or subboxes) represents one specific process, usually a complete computer program in itself, or, in some cases, independent processes within a program. The big "container", labelled "The Pool", contains both the linguistic material and the current analysis of it. Each program or process looks into the Pool for things "it" can recognize, and when the process finds anything it is trained to recognize, it adds its observation to the material in the Pool. This added material may (hopefully) help other processes in recognizing what they are trained to recognize, which in its turn may again help the first process to recognize more of "its" units. And so on.

The system is now under development, and during this build-up phase each process is, as was said above, essentially a complete, stand-alone module, and the Pool exists simply as successively updated text files on a disc storage. At the moment some programs presuppose that other programs have already been run, but this state of affairs will hold just during this build-up phase.
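As a rough illustration of the Pool idea (not the Stockholm implementation, which works on text files and Beta programs), the following Python sketch lets a few toy annotators repeatedly scan a shared string and add their bracket-style observations until a full round changes nothing. The two demo annotators only mimic, in a very simplified way, the kind of recognizers described below.

```python
# Minimal sketch of the "Pool": independent recognizer processes repeatedly
# scan a shared, annotated text and add observations until nothing new is
# found. The annotators and their rules are simplified illustrations only.

import re

def closed_cat(text):
    # Toy "Closed Cat": tag one closed-class word (the preposition PÅ).
    return re.sub(r'\bPÅ\b', 'ePÅe', text)

def nomfras(text):
    # Toy NOMFRAS: determiner + adjectives in -A + noun with a definite cadence.
    pattern = r'\b(DEN|DET|EN|ETT)((?: [A-ZÅÄÖ]+A)*) ([A-ZÅÄÖ]+(?:AN|EN|ET))\b'
    def tag(m):
        words = [m.group(1)] + m.group(2).split() + [m.group(3)]
        return 'n' + '+'.join(words) + 'n'
    return re.sub(pattern, tag, text)

def run_pool(text, processes):
    """Apply every process to the Pool until a full round changes nothing."""
    changed = True
    while changed:
        changed = False
        for process in processes:
            new_text = process(text)
            if new_text != text:
                text, changed = new_text, True
    return text

print(run_pool("DEN LILLA FLICKAN SATT PÅ STOLEN .", [closed_cat, nomfras]))
# -> nDEN+LILLA+FLICKANn SATT ePÅe STOLEN .
```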
At the end of the build-up phase each program shall be able to run completely independently of any other program in the system and in arbitrary order relative to the others (but, of course, usually perform better if more information is available in the Pool).

In the second phase superordinated control programs are to be implemented. These programs will function as "traffic rules", and via these systems one shall be able to test various strategies, i.e. to test which relative order between the different subsystems yields the optimal result in some kind of "performance metric", some evaluation procedure that takes both speed and quality into account.

The programs/processes shown in Figure 1 all represent rather straightforward Finite State Pattern Matching (FS/PM) procedures. It is rather trivial to show mathematically that a set of interacting FS/PM procedures of the type used in our system will together yield a system that formally has the power of a CF-parser; in practice it will yield a system that in some sense is stronger, at least from the point of view of convenience. Congruence and similar phenomena will be reduced to simple local observations. Transformational variants of sentences will be recognized directly - there will be no need for performing some kind of backward transformational operations. (In this respect a system like this will resemble Gazdar's grammar concept; Gazdar 1980.) The control structures later to be superimposed on the interacting FS/PM systems will also be of a Finite State type. A system of the type then obtained - a system of independent Finite State Automatons controlled by another Finite State Automaton - will in principle have rather complex mathematical properties. It is, e.g., rather easy to see that such a system has stronger capacity than a Type 2 device, but it will not have the power of a full Type 1 system.

The "balloons" in the figure represent independent programs (later to be developed into independent processes inside one "big" program). The figure displays those programs that so far (January 1983) have been implemented and tested (to some extent). Other programs will successively be entered into the system.

The big balloon labelled "The Closed Cat" represents a program that recognizes closed word classes such as prepositions, conjunctions, pronouns, auxiliaries, and so on. The Closed Cat recognizes full word forms directly. The SMURF balloon represents the morphological component (SMURF = "Swedish Murphology"). SMURF itself is organized internally as a complex system of independently operating "demons" - SMURFs - each knowing "its" little corner of Swedish word formation. (The name of the program is an allusion to the popular comic strip leprechauns "les Schtroumpfs", which in Swedish are called "smurfar".) Thus there is one little smurf recognizing derivational morphemes, one recognizing flectional endings, and so on. One special smurf, Phonotax, has an important controlling function: every other smurf must always consult Phonotax before identifying one of "its" (potential) formatives; the word minus this formative must still be pronounceable, otherwise it cannot be a formative. SMURF works entirely without a stem lexicon; it adheres completely to the "philosophy" of using surface signals as far as possible.

NOMFRAS, VERBAL, IFIGEN, CLAUS and PREPPS are other "demons" that recognize different phrases or word groups within sentences, viz. noun phrases, verbal complexes, infinitival constructions, clauses and prepositional phrases, respectively.
N-lex, V-lex and A-lex represent various (sub)lexicons; so far we have tried to do without them as far as possible. One should observe that stem lexicons are no prerequisites for the system to work; adding them only enhances its performance.

The format of the material inside the Pool is the original text, plus appropriate "labelled brackets" enclosing words, word groups, phrases and so on. In this way, the form of representation is consistent throughout, no matter how many different types of analyses have been applied to it. Thus, various people can join our group and write their own "demons" in whatever language they prefer, as long as they can take sentences in text format, be reasonably tolerant to what types of "brackets" they find in there, do their analysis, add their own brackets (in the specified format), and put the result back into the Pool.

Of the various programs SMURF, NOMFRAS and IFIGEN are extensively tested (and, of course, The Closed Cat, which is a simple lexical lookup system), and various examples of analyses from these programs will be demonstrated in the next section. We hope to arrive at a crucial station in this project during 1983, when CLAUS has been more thoroughly tested. If CLAUS performs the way we hope (and preliminary tests indicate that it will), we will have means to identify very quickly the clausal structures of the sentences in an arbitrary running text, thus having a firm base for entering higher hierarchies in the syntactic domains.

The programs are written in the Beta language developed by the present author; cf. Brodda-Karlsson, 1980, and Brodda, 1983, forthcoming. Of the actual programs in the system, SMURF was developed and extensively tested by B.B. during 1977-79 (Brodda, 1979), whereas the others are (being) developed by B.B. and/or Gunnel Källgren, Stockholm (mostly "and").

When a "fresh" text is entered into The Pool it first passes through a preliminary one-pass program, INIT (not shown in Fig. 1), that "normalizes" the text. The original text may be of any type as long as it is regularly typed Swedish. INIT transforms the text so that each graphic sentence will make up exactly one physical record. (Except in poetry, physical records, i.e. lines, usually are of marginal linguistic interest.) Paragraph ends will be represented by empty records. Periods used to indicate abbreviations are just taken away and the abbreviation itself is contracted to one graphic word, if necessary; thus "t.ex." ("e.g.") is transformed into "tex", and so on. Otherwise, periods, commas, question marks and other typographic characters are provided with preceding blanks. Through this each word is guaranteed to be surrounded by blanks, and delimiters like commas, periods and so on are guaranteed to signal their "normal" textual functions. Each record is also ended by a sentence delimiter (preceded by a blank). Some manual postediting is sometimes needed in order to get the text normalized according to the above. In the INIT phase no linguistic analysis whatsoever is introduced (other than into what appears to be orthographic sentences).

INIT also changes all letters in the original text to their corresponding upper case variants. (Originally capital letters are optionally provided with a prefixed "=".) All subsequent analysis programs add their analyses in the form of lower case letters or letter combinations. Thus upper case letters or words will belong to the object language, and lower case letters or letter combinations will signal meta-language information.
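A minimal sketch of an INIT-style normalization pass might look as follows; the abbreviation list and the exact rules are simplified assumptions, not the actual program.

```python
# Rough sketch of an INIT-style normalization pass, assuming a small,
# hypothetical abbreviation list; the real program's rules are richer.

import re

ABBREVIATIONS = {"t.ex.": "tex", "bl.a.": "bla"}   # assumed examples

def init_normalize(raw):
    text = raw
    for abbr, contracted in ABBREVIATIONS.items():
        text = text.replace(abbr, contracted)
    # Mark original capitals with a prefixed "=" before upper-casing everything.
    text = re.sub(r'[A-ZÅÄÖ]', lambda m: '=' + m.group(0), text)
    text = text.upper()
    # Provide sentence delimiters and commas with preceding blanks.
    text = re.sub(r'\s*([.,!?])', r' \1', text)
    # One orthographic sentence per output record.
    records = re.split(r'(?<=[.!?])\s+', text.strip())
    return [r.strip() for r in records if r.strip()]

for record in init_normalize("Hunden såg t.ex. Kalle. Sedan sprang den hem."):
    print(record)
# -> =HUNDEN SÅG TEX =KALLE .
# -> =SEDAN SPRANG DEN HEM .
```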
In this way, strictly text (ASCII) format can be kept for the text as well as for the various stages of its analysis; the "philosophy" of using text input and text output for all programs involved represents the computational solution to the problem of how to make it possible for each process to work independently of all the others in the system.

The Closed Cat (CC) has the important role of marking words belonging to some well defined closed categories of words. This program makes no internal analysis of the words, and only takes full words into account. CC makes use of simple rewrite rules of the type "PÅ => ePÅe / (blank)__(blank)", where the inserted e's represent the "analysis" ("e" stands for "preposition"; PÅ = "on"). A sample output from The Closed Cat is shown in illustration 2, where the various meta-symbols also are explained.

The simple example above also shows the format of inserted meta-information. Each identified constituent is "tagged" with surrounding lower case letters, which then can be conceived of as labelled brackets. This format is used throughout the system, also for complex constituents. Thus the nominal phrase "DEN LILLA FLICKAN" ("the little girl") will be tagged as "nDEN+LILLA+FLICKANn" by NOMFRAS (cf. below; the pluses are inserted to make the constituent one continuous string). We have reserved the letters n, v and s for the major categories nouns or noun phrases, verbs or verbal groups, and sentences, respectively, whereas other more or less transparent letters are used for other categories. (A list of used category symbols is presented in the Appendix: Printout Illustrations.)

The program SWEMRF (or SMURF, as it is called here) has been extensively described elsewhere (Brodda, 1979). It makes a rather intricate morphological analysis word by word in running text (i.e. SMURF analyzes each word in itself, disregarding the context it appears in). SMURF can be run in two modes, in "segmentation" mode and "analysis" mode. In its segmentation mode SMURF simply strips off the possible affixes from each word; it makes no use of any stem lexicon. (The affixes it recognizes are common prefixes, suffixes - i.e. derivational morphemes - and flexional endings.) In analysis mode it also tries to make an optimal guess of the word class of the word under inspection, based on what (combinations of) word formation elements it finds in the word. SMURF in itself is organized entirely according to the heuristic principles as they are conceived here, i.e. as a set of independently operating processes that interactively work on each other's output. The SMURF system has been the test bench for testing out the methods now being used throughout the entire Heuristic Parsing Project.

In its segmentation mode SMURF functions formally as a set of interactive transformations, where the structural changes happen to be extremely simple, viz. simple segmentation rules of the type "P => P-", "S => -S" and "E => -E" for an arbitrary Prefix, Suffix and Ending, respectively, but where the "job" essentially consists of establishing the corresponding structural descriptions. These are shown in Ill. 1, below, together with sample analyses. It should be noted that phonotactic constraints play a central role in the SMURF system; in fact, one of the main objectives in designing the SMURF system was to find out how much information actually was carried by the phonotactic component in Swedish. (It turned out to be quite much; cf. Brodda 1979.
This probably holds for other Germanic languages as well, which all have a rather elaborated phonotaxis.)

NOMFRAS is the next program to be commented on. The present version recognizes structures of the type det/quant + (adj)n + noun, where the "det/quant" categories (i.e. determiners or quantifiers) are defined explicitly through enumeration - they are supposed to belong to the class of "surface markers" and are as such identified by The Closed Cat. Adjectives and nouns, on the other hand, are identified solely on the ground of their "cadences", i.e. what kind of (formally) ending-like strings they happen to end with. The number of adjectives that are accepted (n in the formula above) varies depending on what (probable) type of construction is under inspection. In indefinite noun phrases the substantial content of the expected endings is, to say the least, meager, as both nouns and adjectives in many situations only have 0-endings. In definite noun phrases the noun mostly - but not always - has a more substantial and recognizable ending, and all intervening adjectives have either the cadence -A or a cadence from a small but characteristic set. In a (supposed) definite noun phrase all words ending in any of the mentioned cadences are assumed to be adjectives, but in (supposed) indefinite noun phrases not more than one adjective is assumed unless other types of morphological support are present.

The Finite State scheme behind NOMFRAS is presented in Ill. 2, together with sample outputs; in this case the text has been preprocessed by The Closed Cat, and it appears that these two programs in cooperation are able to recognize noun phrases of the discussed type correctly to well over 95% in running text (at a speed of about 5 sentences per second, CPU time); the errors were shared about 50% each between over- and undergenerations. Preliminary experiments aiming at including also SMURF and PREPPS (Prepositional Phrases) seem to indicate that about the same recall and precision rate could be kept for arbitrary types of (nonsentential) noun phrases (cf. Ill. 6). (The systems are not yet trimmed to the extent that they can be operatively run together.)

IFIGEN recognizes infinitival groups according to a scheme of roughly the type

Aux (nXn) (Adv)° ATT ... -A # (C)CV-(A/I)T #

where "Aux" and "Adv" are categories recognized by The Closed Cat (tagged "g" and "a", respectively), and "nXn" are structures recognized by either NOMFRAS or, in the case of personal pronouns, by CC. (It should be worth mentioning that the class of auxiliaries in Swedish is more open than the corresponding word class in English; besides the "ordinary" VARA ("to be"), HA ("to have") and the modals, there is a fuzzy class of semi-auxiliaries like BÖRJA ("begin") and others; IFIGEN makes use of about 20 of these in the present version.) The supine cadence -(A/I)T is supposed to appear only once in an infinitival group. A sample output of IFIGEN is given in Ill. 3. Also for IFIGEN we have reached a recognition level of around 95%, which, again, is rather astonishing, considering how little information actually is made use of in the system.
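The following Python sketch illustrates the IFIGEN idea of guessing infinitival groups from an "infinitive warner" plus the cadence -A; the word lists, the stop cadences and the output format are simplified assumptions rather than the actual Beta program.

```python
# Illustrative IFIGEN-style guesser (not the original program): an
# "infinitive warner" (ATT or an auxiliary), optionally followed by adverbs,
# followed by a word with the cadence -A, is guessed to open an infinitival
# group. Word lists and stop cadences are simplified assumptions.

AUXILIARIES = {"SKA", "KAN", "MÅSTE", "BÖRJAR", "VILL"}   # assumed sample
ADVERBS = {"INTE", "GENAST", "OFTA"}                      # assumed sample
STOP_CADENCES = ("ARNA", "ORNA")        # plural nouns, never infinitives
STOP_WORDS = {"ALLA", "DESSA", "DENNA", "DETTA"}

def ifigen(tokens):
    """Return (warner, infinitive) index pairs guessed to be infinitival."""
    spans = []
    for i, tok in enumerate(tokens):
        if tok != "ATT" and tok not in AUXILIARIES:
            continue
        j = i + 1
        while j < len(tokens) and tokens[j] in ADVERBS:    # skip adverbs
            j += 1
        if j < len(tokens):
            cand = tokens[j]
            if (cand.endswith("A") and cand not in STOP_WORDS
                    and not cand.endswith(STOP_CADENCES)):
                spans.append((i, j))
    return spans

sentence = "HAN LOVADE ATT INTE BERÄTTA DET".split()
print(ifigen(sentence))    # -> [(2, 4)]  i.e. ATT ... BERÄTTA
```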
Certainly it is a typical infinltlval cadence (at least 90% of all infinitives in Swedish have it), but on the other hand, it is certainly a very typical cadence for other types of words as well: FLICKA (noun), HELA (adjective), DENNA/DETTA/DESSA (determiners or pronouns) and so on, and these other types are by no comparison the dominant group having this specific cadence in running text. But, in connection with an "infinitive warner" -an auxiliary, or the word ATT -the situation changes dramatically. This can be demonstrated by the following figures: In running text words having the cadance -A represents infinitives in about 30% of the cases. ATT is an infinitive marker (equivalent to "to") in quite exactly 50% of its occurences (the other 50% it is a subordinating conjunction). The conditional probability that the configuration ATT ..-A represents an inflnltve is, however, greater than 99%, provided that characteristic cadences like -ARNA/-ORNA and quantiflers/determiners llke ALLA and DESSA are disregarded (In our system they are marked by SMURF and The Closed Cat, respectively, and thereby "saved" from being classified as infinitives.)Given this, there is almost no overgeneration in IFIGEN, but Swedish allows for split infinitives to some extent. Quite much material can be put in between the infinitive warner and the infinitive, and this gives rise to some undergeneration (presengly). (Similar observations regarding conditional probabilities in configurations of linguistic units has been made by Mats Eeg-Olofson, Lund, 1982) . I' I #p> -p - X " V " F (s) -- V " X ; P => (-)P>3) SUFFIXES (S):l (s) I " V " x 1 X " v " F "_S -E# # S :> /S(-)where I : (admissible) initial cluster, F = final cluster, M = mor-he-m-eTnternal cluster, V = vowel, (s) the "gluon"S (cf. TID~INGSMA~), # = word boundary, (=,>,/,-) = earlier accepted affix segmentations, and , finallay, denotes ordinary concatenation. (It is the enhanced element in each pattern that is tested for its segmentability).
null
null
null
null
Main paper: : fully, even from a more absolute point of view. The idea is to find strategic shortcuts derived from guesses about the structure of a sentence based on scanty observations of linguistic units In the sentence. If the guess comes out right much parsing time can be saved, and if it does not, many subobservations may still be valid for revised guesses. In the (very preliminary) experiment reported here the main idea is to make use of (combinations of) surface phenomena as much as possible as the base for the prediction of the structure as a whole. In the parser to be developed along the lines sketched in this report main stress is put on arriving at independently working, parallel recognition procedures.The work reported here Is both aimed at simulatlng certain aspects of human language perception and at arriving at effective algorithms for actual parsing of running text. There is, indeed, a great need for fast such algorithms, e.g. for the analysis of the literally millions of words of running text that already today comprise the data bases in various large information retrieval systems, and which can be expected to expand several orders of magnitude both in importance and In size In the foreseeable future.The genera ! idea behind the system for heuristic parsing now being developed at our group in Stockholm dates more than 15 years back, when I was making an investigation (together with Hans Karlgren, Stockholm) of the possibilities of using computers for information retrieval purposes for the Swedish Governmental Board for Rationalization (Statskontoret) .In the course of this investigation we performed some psycholingulstic experiments aimed at finding out to what extent surface markers, such as endings, prepositions, conjunctions and other (bound) elements from typically closed categories of linguistic units, could serve as a base for a syntactic analysis of sentences. We sampled a couple of texts more or less at random and prepared them in such a way that stems of nouns, adjectives and (main) verbsthese categories being thought of as the main carriers of semantic Information -were substituted for by a mere "-", whereas other formatives were left in their original shape and place. These transformed texts were presented to subjects who were asked to fill in the gaps in such a way that the texts thus obtained were to be both syntactically correct and reasonably coherent.The result of the experiment was rather astonishing. It turned out that not only were the syntactic structures mainly restored, in some few cases also the original content was reestablished, almost word by word. (It was beyond any possibility that the subjects could have had access to the original text.)Even in those cases when the text itself was not restored to this remarkable extent, the stylistic value of the various texts was almost invariably reestablished; an originally lively, narrative story came out as a lively, narrative story , and a piece of rather dull, factual text (from a school text book on sociology) invariably came out as dull, factual prose.This experiment showed quite clearly that at least for Swedish the information contained in the combinations of surface markers to a remarkably high degree reflects the syntactic structure of the original text; in almost all cases also the stylistic value and in some few cases even the semantic content was kept. 
(The extent to which this is true is probably language dependent; Swedish is rather rich in morphology, and this property is certainly a contributing factor for an experiment of this type to come out successful to the extent it actually did.)This type of experiment has since then been repeated many times by many scholars; in fact, it ls one of the standard ways to demonstrate the concept of redundancy in texts. But there are several other important conclusions one could draw from this type of experiments. First of all, of course, the obvious conclusion that surface signals do carry a lot of information about the structure of sentences, probably much more than one has been inclined to think, and, consequently, It could be worth while to try to capture that Information in some kind of automatic analysis system. This is the practical side of it. But there is more to it. One must ask the question why a language llke Swedish is llke this. What are the theoretical implications?Much Interest has been devoted in later years to theories (and speculations) about human per-ception of linguistic stimuli, and I do not think that one speculates too much if one assumes that surface markers of the type that appeared in the described experiment together constitute important clues concerning the gross syntactic structure of sentences (or utterances), clues that are probably much less consiously perceived than, e.g., the actual words in the sentences/utterances. To the extent that such clues are actually perceived they are obviously perceived simultaneously with, i.e. in parallel with, other units (words, for instance).The above way of looking upon perception as a set of independently operating processes is, of course, more or less generally accepted nowadays (cf., e.g., Lindsay-Norman 1977) , and it is also generally accepted in computational linguistics that any program that aims at simulating perception in one way or other must have features that simulates (or, even better, actually performs) parallel processing, and the analysis system to be described below has much emphasis on exactly this feature.Another common saying nowadays when discussing parsing techniques is that one should try to incorporate "heuristic devices" (cf., e.g., the many subreports related to the big ARPAproject concerning Speech Recognition and Understanding 1970-76), although there does not seem to exist a very precise consensus of what exactly that would mean. (In mathematics the term has been traditionally used to refer to informal reasoning, especially when used in classroom situations.In a famous study the hungarian mathematician Polya, 1945 put forth the thesis that heuristics is one of the most important psychological driving mechanisms behind mathematical -or scientific -progress. In AIliterature it is often used to refer to shortcut search methods in semantic networks/spaces; c.f. Lenat, 1982) .One reason for trying to adopt some kind of heuristic device in the analysis procedures is that one for mathematical reasons knows that ordinary, "careful", parsing algorithms inherently seem to refuse to work in real time (i.e. in linear time), whereas human beings, on the whole, seem to be able to do exactly that (i.e. perceive sentences or utterances simultaneously with their production). 
Parallel processing may partly be an answer to that dilemma, but still, any process that claims to actually simulate some part of human perception must in some way or other simulate the remarkable abilities human beings have in grasping complex patterns ("gestalts") seemingly in one single operation.Ordinary, careful, parsing algorithms are often organized according to some general principle such as "top-down", "bottom-to-top", "breadth first", "depth first", etc., these headings referring to some specified type of "strategy". The heuristic model we are trying to work out has no such preconceived strategy built into it. Our philosophy is instead rather anarchistic (The Heuristic Principle): Whatever linguistic unit that can be identified at whatever stage of the analysis, according to whatever means there are, i_~s identified, and the significance of the fact that the unit in question has been identified is made use of in all subsequent stages of the analysis. At any time one must.be prepared to reconsider an already established analysis of a unit on the ground that evidence a~alnst the analysis may successively accumulate due to what analyses other units arrive at.In next section we give a brief description of the analysis system for Swedish that is now under development at our group in Stockholm. As has been said, much effort is spent on trying to make use of surface signals as much as possible. Not that we believe that surface signals play a more important role than any other type of linguistic signals, but rather that we think it is important to try to optimize each single subprocess (in a parallel system) as much as ~osslble, and, as said, it might be worth while to look careful into this level, because the importance of surface signals might have been underestimated in previous research. Our exneriments so far seem to indicate that they constitute excellent units to base heuristic guesses on. Another reason for concentrating our efforts on this level is that it takes time and requires much hard computational work to get such an anarchistic system to really work, and this surface level is reasonably simple to handle.Figure 1 below shows the general outline of the system. Each of the various boxes (or subboxes) represents one specific process, usually a complete computer program in itself, or, in some cases, independent processes within a program. The big "container", labelled "The Pool", contains both the linguistic material as well as the current analysis of it. Each program or process looks into the Pool for things "it" can recognize, and when the process finds anything it is trained to recognize, it adds its observation to the material in the Pool. This added material may (hopefully) help other processes in recognizing what they are trained to recognize, which in its turn may again help the first process to recognize more of "its" units. And so on.The system is now under development and during this build-up phase each process is, as was said above, essentially a complete, stand-alone module, and the Pool exists simply as successively updated text files on a disc storage.At the moment some programs presuppose that other programs have already been run, but this state of affairs will be valid Just during this build~up phase. 
At the end of the build-up phase each program shall be able to run completely independent of any other program in the system and in arbitrary order relative to the others (but, of course, usually perform better if more information is available in the Pool).In the ~econd phase superordinated control programs are to be implemented. These programs will function as "traffic rules" and via these systems one shall be able to test various strategies, i.e. to test which relative order between the different subsystems that yields optimal resuit in some kind of "performance metric", some evaluation procedure that takes both speed and quality into account.The programs/processes shown in Figure i all represent rather straightforward Finite State Pattern Matching (FS/PM) procedures. It is rather trivial to show mathematically that a set of interacting FS/PM procedures of the type used in our system together will yield a system that formally has the power of a CF-parser; in practice it will yield a system that in some sense is stronger, at least from the point of view of convenience. Congruence and similar phenomena will be reduced to simple local observations. Transformational variants of sentences will be recognized directly -there will be no need for performing some kind of backward transformational operations. (In this respect a system llke this will resemble Gazdar's grammar concept; Gazdar 1980. ) The control structures later to be superimposed on the interacting FS/PM systems will also be of a Finite State type. A system of the type then obtained -a system of independent Finite State Automatons controlled by another Finite State Automaton -will in principle have rather complex mathematical properties. It is, e.g., rather easy to see that such a system has stronger capacity than a Type 2 device, but it will not have the power of a full Type I system.The "balloons" in the figure represent independent programs (later to be developed into independent processes inside one "big" program). The figure displays those programs that so far (January 1983) have been implemented and tested (to some extent). Other programs will successively be entered into the system.The big balloon labelled "The Closed Cat" represents a program that recognizes closed word classes such as prepositions, conjunctions, pronouns, auxiliaries, and so on. The Closed Cat recognizes full word forms directly. The SMURF balloon represents the morphological component (SMURF = "Swedish Murphology"). SMURF itself is organized internally as a complex system of independently operating "demons" -SMURFs -each knowing "its' little corner of Swedish word formation. (The name of the program is an allusion to the popular comic strip leprechauns "les Schtroumpfs", which in Swedish are called "smurfar".) Thus there is one little smurf recognizing derivat[onal morphemes, one recognizing flectional endings, and so on. One special smurf, Phonotax, has an important controlling functionevery other smurf must always consult Phonotax before identifying one of "its" (potential) forma-tires; the word minus this formative must still be pronounceable, otherwise it cannot be a formative. SMURF works entirely without stem lexicon; it adheres completely to the "philosophy" of using surface signals as far as possible.NOMFRAS, VERBAL, IFIGEN, CLAUS and PREPPS are other "demons" that recognize different phrases or word groups within sentences, viz. noun phrases, verbal complexes, infinitival constructions, clauses and prepositional phrases, respectively. 
N-lex, V-lex and A-lex represent various (sub)lexicons; so far we have tried to do without them as far as possible. One should observe that stem lexicons are no prerequisites for the system to work, adding them only enhances its performance.The format of the material inside the Pool is the original text, plus appropriate "labelled brackets" enclosing words, word groups, phrases and so on. In this way, the form of representation is consistent throughout, no matter how many different types of analyses have been applied to it. Thus, various people can join our group and write their own "demons" in whatever language they prefer, as long as they can take sentences in text format, be reasonably tolerant to what types of '~rackets" they find in there, do their analysis, add their own brackets (in the specified format), and put the result back into the Pool.Of the various programs SMURF, NOMFRAS and IFIGEN are extensively tested (and, of course, The Closed Cat, which is a simple lexical lookup system), and various examples of analyses of these programs will be demonstrated in the next section. We hope to arrive at a crucial station in this project during 1983, when CLAUS has been more thoroughly tested. If CLAUS performs the way we hope (and preliminary tests indicate that it will), we will have means to identify very quickly the clausal structures of the sentences in an arbitrary running text, thus having a firm base for entering higher hierarchies in the syntactic domains.The programs are written in the Beta language developed by the present author; c.f. Brodda-Karlsson, 1980, and Brodda, 1983 , forthcoming. Of the actual programs in the system, SMURF was developed and extensively tested by B.B. during 1977-79 (Brodda, 1979) , whereas the others are (being) developed by B.B. and/or Gunnel KEllgren, Stockholm (mostly "and").When a "fresh" text is entered into The Pool it first passes through a preliminary one-passprogram, INIT, (not shown in Fig. i ) that "normalizes" the text. The original text may be of any type as long as it Is regularly typed Swedish. INIT transforms the text so that each graphic sentence will make up exactly one physical record. (Except in poetry, physical records, i.e. lines, usually are of marginal linguistic interest.) Paragraph ends will be represented by empty records. Periods used to indicate abbreviations are Just taken away and the abbreviation itself is contracted to one graphic word, if necessary; thus "t.ex." ("e.g.") is transformed into "rex", and so on. Otherwise, periods, commas, question marks and other typographic characters are provided with preceding blanks. Through this each word is guaranteed to be surrounded by blanks, and delimiters llke commas, periods and so on are guaranteed to signal their "normal" textual functions. Each record is also ended by a sentence delimiter (preceded by a blank). Some manual postediting is sometimes needed in order to get the text normalized according to the above. In the INIT-phase no linguistic analysis whatsoever is introduced (other than into what appears to be orthographic sentences).INIT also changes all letters in the original text to their corresponding upper case variants. (Originally capital letters are optionally provided with a prefixed "=".) All subsequent analysis programs add their analyses In the form of lower case letters or letter combinations. Thus upper case letters or words will belong to the object language, and lower case letters or letter combinations will signal meta-language information. 
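As a rough, hedged approximation of what such an INIT pass does (the abbreviation list, the regular expressions and the example sentence are invented for illustration and do not reproduce the actual program):

```python
# Approximate INIT normalization: contract listed abbreviations, give every
# punctuation mark a preceding blank, uppercase the text and emit one
# orthographic sentence per record.
import re

ABBREVIATIONS = {"T.EX.": "TEX", "BL.A.": "BLA"}   # illustrative list only

def init_normalize(raw: str) -> list[str]:
    text = raw.upper()
    for abbr, contracted in ABBREVIATIONS.items():
        text = text.replace(abbr, contracted)
    text = re.sub(r"([.,;:!?])", r" \1", text)      # blanks before delimiters
    records = []
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        sentence = " ".join(sentence.split())        # collapse extra blanks
        if sentence:
            records.append(sentence)
    return records

print(init_normalize("Han kom, t.ex. i går. Hon gick hem."))
# -> ['HAN KOM , TEX I GÅR .', 'HON GICK HEM .']
```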
With these conventions, a strict text (ASCII) format can be kept for the text as well as for the various stages of its analysis; the "philosophy" of using text input and text output for all programs involved represents the computational solution to the problem of how to make it possible for each process to work independently of all others in the system.

The Closed Cat (CC) has the important role of marking words that belong to certain well-defined closed categories. This program makes no internal analysis of the words, and only takes full words into account. CC makes use of simple rewrite rules of the type "PÅ => ePÅe / (blank)__(blank)", where the inserted e's represent the "analysis" ("e" stands for "preposition"; PÅ = "on"). A sample output from The Closed Cat is shown in Illustration 2, where the various meta-symbols are also explained. The simple example above also shows the format of inserted meta-information. Each identified constituent is "tagged" with surrounding lower case letters, which can then be conceived of as labelled brackets. This format is used throughout the system, also for complex constituents. Thus the nominal phrase "DEN LILLA FLICKAN" ("the little girl") will be tagged as "nDEN+LILLA+FLICKANn" by NOMFRAS (cf. below; the pluses are inserted to make the constituent one continuous string). We have reserved the letters n, v and s for the major categories nouns or noun phrases, verbs or verbal groups, and sentences, respectively, whereas other more or less transparent letters are used for other categories. (A list of the category symbols used is presented in the Appendix: Printout Illustrations.)

The program SWEMRF (or SMURF, as it is called here) has been extensively described elsewhere (Brodda, 1979). It makes a rather intricate morphological analysis word by word in running text (i.e. SMURF analyzes each word in itself, disregarding the context it appears in). SMURF can be run in two modes, a "segmentation" mode and an "analysis" mode. In its segmentation mode SMURF simply strips off the possible affixes from each word; it makes no use of any stem lexicon. (The affixes it recognizes are common prefixes, suffixes, i.e. derivational morphemes, and flexional endings.) In analysis mode it also tries to make an optimal guess of the word class of the word under inspection, based on what (combinations of) word formation elements it finds in the word. SMURF in itself is organized entirely according to the heuristic principles as they are conceived here, i.e. as a set of independently operating processes that interactively work on each other's output. The SMURF system has been the test bench for testing out the methods now being used throughout the entire Heuristic Parsing Project. In its segmentation mode SMURF functions formally as a set of interactive transformations, where the structural changes happen to be extremely simple, viz. simple segmentation rules of the type "P => P-", "S => -S" and "E => -E" for an arbitrary Prefix, Suffix and Ending, respectively, but where the "job" essentially consists of establishing the corresponding structural descriptions. These are shown in Ill. 1 below, together with sample analyses. It should be noted that phonotactic constraints play a central role in the SMURF system; in fact, one of the main objectives in designing the SMURF system was to find out how much information actually was carried by the phonotactic component in Swedish. (It turned out to be quite a lot; cf. Brodda, 1979. This probably holds for other Germanic languages as well, which all have a rather elaborate phonotaxis.)
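The segmentation idea can be sketched as follows; this is only a toy fragment written for this description, with made-up affix lists and a grossly simplified stand-in for Phonotax, not the SMURF program itself:

```python
# Toy SMURF segmentation mode: strip candidate endings and suffixes without
# any stem lexicon, accepting a cut only if the residue still passes a
# (grossly simplified) phonotactic test.

ENDINGS  = ["ARNA", "ORNA", "EN", "AR", "A"]      # flexional endings (sample)
SUFFIXES = ["NING", "HET", "SKAP"]                # derivational morphemes (sample)
VOWELS   = set("AEIOUYÅÄÖ")

def pronounceable(stem: str) -> bool:
    # Stand-in for the Phonotax demon: the residue must contain a vowel
    # and be at least two letters long.
    return len(stem) >= 2 and any(ch in VOWELS for ch in stem)

def segment(word: str) -> str:
    for ending in ENDINGS:
        if word.endswith(ending) and pronounceable(word[: -len(ending)]):
            word = word[: -len(ending)] + "-" + ending
            break
    stem = word.split("-")[0]
    for suffix in SUFFIXES:
        if stem.endswith(suffix) and pronounceable(stem[: -len(suffix)]):
            word = stem[: -len(suffix)] + "-" + suffix + word[len(stem):]
            break
    return word

print(segment("TIDNINGARNA"))   # -> TID-NING-ARNA
```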
NOMFRAS is the next program to be commented on. The present version recognizes structures of the type

det/quant + (adj)^n + noun,

where the "det/quant" categories (i.e. determiners or quantifiers) are defined explicitly through enumeration; they are supposed to belong to the class of "surface markers" and are as such identified by The Closed Cat. Adjectives and nouns, on the other hand, are identified solely on the grounds of their "cadences", i.e. what kind of (formally) ending-like strings they happen to end with. The number of adjectives that are accepted (n in the formula above) varies depending on what (probable) type of construction is under inspection. In indefinite noun phrases the substantial content of the expected endings is, to say the least, meager, as both nouns and adjectives in many situations only have zero endings. In definite noun phrases the noun mostly (but not always) has a more substantial and recognizable ending, and all intervening adjectives have either the cadence -A or a cadence from a small but characteristic set. In a (supposed) definite noun phrase all words ending in any of the mentioned cadences are assumed to be adjectives, but in (supposed) indefinite noun phrases not more than one adjective is assumed unless other types of morphological support are present. The Finite State scheme behind NOMFRAS is presented in Ill. 2, together with sample outputs; in this case the text has been preprocessed by The Closed Cat, and it appears that these two programs in cooperation are able to recognize noun phrases of the discussed type correctly to well over 95% in running text (at a speed of about 5 sentences per second, CPU time); the errors were shared about 50% each between over- and undergeneration. Preliminary experiments aiming at also including SMURF and PREPPS (prepositional phrases) seem to indicate that about the same recall and precision rate could be kept for arbitrary types of (non-sentential) noun phrases (cf. Ill. 6). (The systems are not yet trimmed to the extent that they can be operatively run together.)

The scheme behind IFIGEN is of the type

Aux (nXn) (Adv) ATT ... -A #  /  (C)CV-(A/I)T #

where "Aux" and "Adv" are categories recognized by The Closed Cat (tagged "g" and "a", respectively), and "nXn" are structures recognized either by NOMFRAS or, in the case of personal pronouns, by CC. (It is worth mentioning that the class of auxiliaries in Swedish is more open than the corresponding word class in English; besides the "ordinary" VARA ("to be"), HA ("to have") and the modals, there is a fuzzy class of semi-auxiliaries like BÖRJA ("begin") and others; IFIGEN makes use of about 20 of these in the present version.) The supine cadence -(A/I)T is supposed to appear only once in an infinitival group. A sample output of IFIGEN is given in Ill. 3. Also for IFIGEN we have reached a recognition level around 95%, which, again, is rather astonishing, considering how little information actually is made use of in the system.
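A hedged, minimal rendering of the IFIGEN idea (the warner list, the exclusion lists and the example are small invented samples, and the real IFIGEN works on the bracketed Pool format rather than on plain word lists):

```python
# Toy IFIGEN: an "infinitive warner" (ATT or an auxiliary) followed by a word
# with the cadence -A is tagged as an infinitive, unless the word carries a
# typical noun cadence or is a known determiner/quantifier.

WARNERS       = {"ATT", "SKA", "VILL", "KAN", "MÅSTE"}   # illustrative subset
NOUN_CADENCES = ("ARNA", "ORNA")
EXCL_WORDS    = {"ALLA", "DESSA", "DENNA"}               # caught by The Closed Cat

def mark_infinitives(words: list[str]) -> list[str]:
    out, warned = [], False
    for w in words:
        if w in WARNERS:
            warned = True
        elif (warned and w.endswith("A")
              and not w.endswith(NOUN_CADENCES) and w not in EXCL_WORDS):
            w = "i" + w + "i"          # tag the infinitive
            warned = False
        out.append(w)
    return out

print(mark_infinitives("HON LOVADE ATT GENAST SVARA PÅ BREVET".split()))
# -> ['HON', 'LOVADE', 'ATT', 'GENAST', 'iSVARAi', 'PÅ', 'BREVET']
```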
The IFIGEN case illustrates very clearly one of the central points in our heuristic approach, namely the following: the information that a word has a specific cadence, in this case the cadence -A, is usually of very little significance in itself in Swedish. Certainly it is a typical infinitival cadence (at least 90% of all infinitives in Swedish have it), but on the other hand, it is certainly a very typical cadence for other types of words as well: FLICKA (noun), HELA (adjective), DENNA/DETTA/DESSA (determiners or pronouns) and so on, and these other types are beyond comparison the dominant group having this specific cadence in running text. But in connection with an "infinitive warner" (an auxiliary, or the word ATT) the situation changes dramatically. This can be demonstrated by the following figures. In running text, words having the cadence -A represent infinitives in about 30% of the cases. ATT is an infinitive marker (equivalent to "to") in almost exactly 50% of its occurrences (in the other 50% it is a subordinating conjunction). The conditional probability that the configuration ATT ... -A represents an infinitive is, however, greater than 99%, provided that characteristic cadences like -ARNA/-ORNA and quantifiers/determiners like ALLA and DESSA are disregarded (in our system they are marked by SMURF and The Closed Cat, respectively, and thereby "saved" from being classified as infinitives). Given this, there is almost no overgeneration in IFIGEN, but Swedish allows for split infinitives to some extent. Quite a lot of material can be put in between the infinitive warner and the infinitive, and this gives rise to some undergeneration (presently). (Similar observations regarding conditional probabilities in configurations of linguistic units have been made by Mats Eeg-Olofsson, Lund, 1982.)

Illustration 1 (schematic): the SMURF segmentation rules P => P-, S => -S and E => -E for prefixes, suffixes and endings, each with its structural description, where I = (admissible) initial cluster, F = final cluster, M = morpheme-internal cluster, V = vowel, (s) = the "gluon" s (cf. TIDNINGSMAN), # = word boundary, and (=, >, /, -) = earlier accepted affix segmentations. It is the enhanced element in each pattern that is tested for its segmentability.

Appendix: Printout Illustrations
null
null
null
null
{ "paperhash": [ "robson|how_to_solve_it" ], "title": [ "How to Solve It" ], "abstract": [ "Have you seen it before? Or have you seen the same problem in a slightly different form? Do you know a related problem? Do you know a theorem that could be useful? Look at the unknown! And try to think of a familiar problem having the same or similar unknwn. Here is a problem similar to yours and solved before. Could you use it? Could you use its result? Could you use its method? Should you introduce some auxiliary element in order to make its use possible? Could you restate the problem? Could you restate it still differently? Go back to definitions." ], "authors": [ { "name": [ "A. Robson", "G. Pólya" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null ], "s2_corpus_id": [ "60680300" ], "intents": [ [] ], "isInfluential": [ false ] }
null
497
0.022133
null
null
null
null
null
null
null
null
f43f055173954c432780af3100a88e74cfa7d144
19854743
null
A Phonological Processor for {I}talian
A computer program for the automatic translation of any text of Italian into naturally fluent synthetic speech is presented. The program, or Phonological Processor (hence FP), maps the phonological rules of Italian onto prosodic structures. Structural information is provided by such hierarchical prosodic constituents as the Syllable (S), the Metrical Foot (MF), the Phonological Word (PW) and the Intonational Group (IG). Onto these structures, phonological rules are applied, such as the "letter-to-sound" rules, automatic word stress rules, internal stress hierarchy rules indicating secondary stress, external sandhi rules, phonological focus assignment rules and logical focus assignment rules. The FP also constitutes a model to simulate the process of reading aloud, and the related psycholinguistic and cognitive aspects will be discussed in the computational model of the FP. At present, the Logical Focus assignment rules and the computational model are work in progress still to be implemented in the FP. Recorded samples of automatically produced synthetic speech will be presented at the conference to illustrate the functioning of the rules.
{ "name": [ "Delmonte, Rodolfo" ], "affiliation": [ null ] }
null
null
First Conference of the {E}uropean Chapter of the Association for Computational Linguistics
1983-09-01
31
8
null
The FP, which we shall describe in detail in the following pages, is the terminal section of a system of speech synthesis by rule without vocabulary restrictions, implemented at the Centre of Computational Sonology of the University of Padua. From the linguistic point of view the FP is a model to simulate the operations carried out by an Italian speaker when reading any text aloud. To this end, the speaker uses the rules of his internal grammar to translate graphic signs into natural speech. These rules will have to be implemented in the FP, together with a computational mechanism simulating the psychological and cognitive functions of the reading process.

At the phonological level the FP has to account for low level or segmental phenomena, and high level or suprasegmental ones. The former are represented by three levels of structure, that is S, MF and PW, and are governed by phonological rules which are meant to render the movements of the vocal tract and the coarticulatory effects which occur regularly at word level and at word boundaries. The latter are represented by one level of structure, the IG, and are governed by rules which account for long range phenomena like pitch contour formation, intonation centre assignment and pauses. In brief, the rules that the FP has to apply are the following:
i. transcription from grapheme to "phoneme", including the most regular coarticulatory and allophonic phenomena of the Italian language;
ii. automatic word stress assignment, including all the most frequent exceptions to the rules as well as individuation of homographs, which are very common in Italian;
iii. internal word stress hierarchy, with secondary stress assignment and individuation of unstressed diphthongs, triphthongs and hiatuses;
iv. external sandhi rules, operating at word boundaries and resulting in stress retraction, destressing, stress hierarchy modification, elision by assimilation and other phenomena;
v. destressing of functional words listed in a table lookup;
vi. pauses marked off by punctuation; pauses deriving from a count of PWs; pauses deriving from syntactic structural phenomena; comma intonation marking of parentheticals and similar structures;
vii. rules to restructure the IG when too long (more than 7 PWs) or too short (less than 5 PWs);
viii. Focus Assignment Rules, or FAR, which at first mark Phonological Focus, an intonation centre dependent on lexical and phonologically determined phenomena;
ix. FAR which mark Logical Focus, an intonation centre dependent on structurally determined phenomena.
From a general computational point of view, the FP operates bottom-up to apply low level rules, analysing one word at a time until the PW structure is reached; it operates top-down to apply high level rules and to build the higher structure, the IG.

As far as phonematic transcription of Italian texts is concerned, there seem to be no such difficulties as there are for English. In fact, the "letter-to-sound" rules are few and quite straightforward to describe. There are a number of exceptions and counterexceptions to the rules which have to be specified, but no dictionary lookup seems to be needed. What creates the main difficulties are digraphs and trigraphs, which are ambiguous in that they can render both stops and palatals; some of the decisions concerning trigraphs must be taken after stress has been assigned by the word stress rules.
The following graphemes have been transcribed into symbols denoting "phonetic elements":

K = CH, C+A,+O,+U; KK = CCH, CC+A,+O,+U ---> /k/
% = CI, CE, CI+Vowel; %% = CCI, CCE, CCI+Vowel ---> /tʃ/
J = GI, GE, GI+Vowel; JJ = GGI, GGE, GGI+Vowel ---> /dʒ/
/ = SCI, SCE, SCI+Vowel ---> /ʃ/
< = GLI, GLE, GLI+Vowel ---> /ʎ/
> = GN+Vowel ---> /ɲ/
X = Voiced S; XX = Geminate S ---> /z/
& = Voiced Z; && = Geminate Z ---> /dz/

And here are some exceptions:

GLICINE, ANGLIA, GEROGLIFICO, where GL = /gl/ and not /ʎ/
FARMACIA, LUCIA, where CI = /tʃi/ and not /tʃ/
BUGIA, AEROFAGIA, NOSTALGIA, where GI = /dʒi/ and not /dʒ/
SCIA, where SCI = /ʃi/ and not /ʃ/

Below we include the flowchart of the phonological rules for the transcription of the graphemes S and Z which, as we said, both have voiced and unvoiced phonemes. As can easily be seen, the two graphemes have been treated together by the same set of rules operating conjunctively: a remarkable economy and simplicity has thus resulted; as to the theoretical import of using one and the same algorithm, it has been shown that voiced S/Z decisions obey similar underlying phonological rules.
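As an illustration only (the rule table, the exception entries and the output notation are reduced samples, not the full set used by the FP), a longest-match-first transcription of the digraphs and trigraphs could look like this:

```python
# Simplified letter-to-sound pass for the digraphs/trigraphs listed above.
# Longest graphemes are tried first; the exception list is a token sample.

RULES = [                       # (grapheme, phonetic element)
    ("CCH", "kk"), ("SCI", "ʃ"), ("SCE", "ʃe"), ("GLI", "ʎ"),
    ("GN", "ɲ"), ("CH", "k"), ("CI", "tʃ"), ("CE", "tʃe"),
    ("GI", "dʒ"), ("GE", "dʒe"),
]
EXCEPTIONS = {"GLICINE": "glitʃine", "FARMACIA": "farmatʃia"}  # illustrative

def transcribe(word: str) -> str:
    word = word.upper()
    if word in EXCEPTIONS:
        return EXCEPTIONS[word]
    out, i = "", 0
    while i < len(word):
        for graph, phone in RULES:
            if word.startswith(graph, i):
                out += phone
                i += len(graph)
                break
        else:
            out += word[i].lower()
            i += 1
    return out

print(transcribe("chiesa"), transcribe("scienza"), transcribe("gnocchi"))
# -> kiesa ʃenza ɲokki
```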
At this point the FP has to be provided with rules which transform one or more PWs, joining them into an IG, as well as with rules which assign the intonation centre of the utterance. The two operations depend on the Rules of IG Construal and on the Focus Assignment Rules, or FAR. IG Construal Rules should intuitively build well-formed IGs. General well-formedness conditions can be established so that phonological facts reflecting performance limitations, as well as syntactic and semantic phenomena, are adequately taken into account. These conditions are as follows:
CONDITIONS A, determined by intrinsic characteristics of the functioning of memory and of the articulatory apparatus, which impose restrictions on the length of an IG; length is defined in terms of the number of constituents, i.e. PWs, to be packed into an IG; this number could vary with the speaking rate and other performance parameters which are strictly related to temporal and spatial limitations of the language faculty;
CONDITIONS B, determined by the need to transmit in an IG chunks of conceptual and semantic information concluded in themselves and related to the rules of the internal grammar.
Construal Rules referring to Conditions A will first base their application on punctuation, assigning main pauses for each comma, full stop, colon and semi-colon detected in the text. Restructuring may then take place according to the number of constituents present in each IG: if there are less than three, the IG is too small to stand on its own and will be joined to the preceding one; if there are more than seven PWs, and the utterance is not yet ended, two IGs will result, according to the phrase structure analysed by the grammar component, or provisionally according to contextual information based on syntactic category labels and on the presence of functional words, which are regarded as proclitics and should be joined to the first following PW.

To satisfy Condition B, phonological information is insufficient; syntactic and semantic information has to be supplied to the FP. The theoretical proposal which, in our opinion, will best suit our performance-oriented processor is the lexical-functional one, discussed at length in Bresnan (1978, 1980, 1982), Kaplan & Bresnan (1981) and Gazdar (1980). The lexical-functional component is made up of two subcomponents:
1. a lexicon, where each entry is completely specified and has associated subcategorization features; lexical items subcategorize for such universal functions as SUBJECT, OBJECT and so on, and not for constituent structure categories; lexical items exert selectional restrictions on a subset of their subcategorized functions; the predicate argument structure of a lexical item lists the arguments for which there are selectional restrictions; each lexical item includes a lexical form which pairs arguments with functions, as well as the grammatical function assignment which lists the syntactically subcategorized functions;
2. context-free rules to generate syntactic constituent structures.
The combination of the two descriptions results in a constituent structure and a functional structure which formally represent the grammatical relations of the analysed utterance in terms of universal functions. Functional relations intervening between the predicate argument structure and adjuncts or complements are determined by a theory of control which is an integral part of the lexical-functional grammar. At this point, we can formulate the following RULES OF IG CONSTRUAL:
1. Constituents moved by dislocation, clefting, extraposition and raising obligatorily form at least one IG (for the exceptions see Delmonte, 1983);
2. Starting with the first PW of an utterance, join into one IG all PWs until you reach:
2.1 the Verb, in wh-questions and imperatives;
2.2 the last element functionally controlled by a VP, i.e. an argument or a subordinate clause, or complements or adjuncts functionally controlled by the Subject or the Object;
2.3 the last element anaphorically controlled by a supraordinated clause where the matrix Subject appears; control is expressed at the functional level by thematic restrictions.
In this way, pauses will be assigned to the most adequate sites, taking into account both performance and structural restrictions.
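A minimal sketch of the length-based part of this procedure (Conditions A only), assuming that PWs can be crudely equated with graphic words and using an invented split strategy for over-long groups:

```python
# Length-based IG construal: cut at punctuation, merge groups shorter than
# 3 PWs into the preceding IG, and split groups longer than 7 PWs.
import re

MIN_PW, MAX_PW = 3, 7

def build_igs(utterance: str) -> list[list[str]]:
    chunks = [c.split() for c in re.split(r"[,.;:]", utterance) if c.split()]
    igs: list[list[str]] = []
    for chunk in chunks:
        if igs and len(chunk) < MIN_PW:          # too short: join to previous IG
            igs[-1].extend(chunk)
        else:
            igs.append(chunk)
    final: list[list[str]] = []
    for ig in igs:
        while len(ig) > MAX_PW:                  # too long: split (naively in half)
            cut = len(ig) // 2
            final.append(ig[:cut])
            ig = ig[cut:]
        final.append(ig)
    return final

print(build_igs("Nella scuola superiore, Giorgio non studia a sufficienza."))
# -> [['Nella', 'scuola', 'superiore'],
#     ['Giorgio', 'non', 'studia', 'a', 'sufficienza']]
```

In the real FP the split point for an over-long IG is chosen from phrase structure or from syntactic category labels and proclitics, not by cutting in half.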
null
null
It is our opinion that Italian speakers do not directly use morpho-syntactic information to assign word stress, but rather apply an ordered set of phonological rules to lexical items completely specified in a lexicon, together with some morphological information (relevant only to a subclass of word types); syntactic category information is limited to the verb class. In other words, Italian is not a free-stress language, as discussed at length in Delmonte (1981). Speakers analyse fully specified lexical items by blocks of word stress rules ordered sequentially, which address different types of words according to syllable structure. Words are made to enter each rule block disjunctively, that is, each word either enters a block and receives stress, or is passed on to the next block. Exceptions are processed first. No word can be sent back to a previous block.
iv. BLOCK III deals with trisyllabic words and with all words ending with -ER+Vowel#, in which stress may result on the penultimate syllable if the word is an exception, and on the antepenult if it is regular;
v. BLOCK IV deals with all words with more than 3 syllables;
vi. BLOCK V, with further subroutines, deals with words either ending with a syllable containing more than one vowel, or with more than one vowel in the penultimate syllable; biphonematic, triphonematic or tetraphonematic vowel groups may result in diphthongs, triphthongs or hiatuses, as in "bugia", "acciaio", "aiuole".
Word stress rules like RULE I take into account a series of phonotactic conditions as well as the syntactic category of verbs, which is essential to the treatment of homographs and to word stress assignment. In fact, Italian is a language very rich in homographs, such as "'ambito" vs. "am'bito", "'aprile" vs. "a'prile", and so on. Usually, as the position of stress varies, the syntactic category varies as well. Such words are included in a table lookup and the syntactic category is decided according to contextual information. Another class of homographs, belonging this time to one and the same syntactic category, is made up of such words as "ri'cordati" vs. "ricor'dati", "im'picciati" vs. "impi'cciati", which are also treated according to contextual information and to the position they occupy in the utterance. If they come in first position or after a pause, it is assumed that they are cliticized imperatives and stress is assigned to the antepenultimate syllable; if they do not have that position in the utterance and an unstressed word precedes them, they are treated as past participles and stress is assigned to the penultimate syllable (see Fig. 2).

These rules mainly take decisions about secondary stress assignment and also about an adequate definition of all the unstressed syllables preceding and following the stressed one. To assign secondary stress the FP builds up the MF structure. This is done by counting the number of syllables preceding the stressed one. The rule states that the FP has to alternate one unstressed syllable before each primary or secondary stressed one. Restructuring may result in words with three or more syllables before the primary stressed one, as in: "fèlici'ta", "autèntici'ta", "artificiali'ta", "fotògra'fare", "cinemàto'grafico", "matemàtica'mente", "rappresèntativa'mente", "utilitarìstica'mente", "precipitevolissimevol'mente". According to the number of syllables, two unstressed syllables may precede or follow the secondary stressed one.
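The strict-alternation baseline of this rule can be sketched as below; the syllabification, the stress position and the numeric coding (1 = primary, 2 = secondary, 0 = unstressed) are supplied by hand here, and the Restructuring Rule discussed next is not modelled:

```python
# Baseline MF build-up: walk leftwards from the primary stress and put a
# secondary stress on every other syllable, leaving the rest unstressed.
# The Restructuring Rule discussed in the text then adjusts this pattern.

def stress_pattern(syllables: list[str], primary: int) -> list[int]:
    pattern = [0] * len(syllables)
    pattern[primary] = 1
    pos = primary - 2
    while pos >= 0:
        pattern[pos] = 2
        pos -= 2
    return pattern

# ma-te-ma-ti-ca-men-te, primary stress on "men"
print(stress_pattern(["ma", "te", "ma", "ti", "ca", "men", "te"], 5))
# -> [0, 2, 0, 2, 0, 1, 0]
```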
The Restructuring Rule for the MF takes into account performance facts which require that the number of secondary stressed syllables be no more than two when speaking at a normal rate, but also that no more than three unstressed syllables may alternate with stressed ones. To produce particular emphasis, i.e. if the word constitutes an utterance in itself, there may obviously be an increase to three secondary stresses in the same word, or even to four, as in "precipitèvolìssimèvol'mente". This will slow down the speaking rate to values (in number of syllables per second) below the norm, only to suit the speaker's aim of producing emphasis.

Up to this point, low level rules have built PWs by stressing some words and destressing some other words, which have become proclitics and are joined to the first stressed word on their right to build a PW, as in "della nostra parte" (on our side). High level rules localize punctuation pauses and start to apply external sandhi rules, which may elide a vowel, as in "la famiglia Agnelli", "il mare è molto agitato" (RULE II); or they may produce schwa-like vowels, as in "hanno interesse", "è incredibile" (RULE III); or retract primary stress, as in "'dottor 'Romolo", "'ingegner 'Rossi" (RULE IVa/b). In the latter case, the stress rules have to move back the primary stress and unstress the remaining syllables. It is essential to apply these rules in this phase, because the intonation centre may only be assigned to primary stressed syllables: exceptions are represented either by auxiliaries which can assume the role of lexical verbs, as in "oggi non ci sono" (today I'm not there), "ho chiesto ma non ce l'hanno" (I asked but they haven't got it); or by clitics and adjectives which can become pronouns, as in "non ci vengo con te" (I don't come with you), "preferisco quella" (I prefer that one).

We can distinguish between two kinds of FAR, marked and unmarked ones. Unmarked FAR depend on phonological and lexical information and give rise to Phonological Focus; marked FAR depend on structural information and give rise to Logical Focus (see Guéron, 1980). Phonological information is used to account for utterances such as simple declaratives, imperatives, wh-questions, yes/no questions and echo questions, where IGs can be built without structural information and the Nuclear Stress Rule can be made to apply in a straightforward way. The Nuclear Stress Rule (see Chomsky & Halle, 1968) can be reformulated as follows: "within an IG reduce to secondary stresses all primary stresses except the one farthest to the right", as in:

2 ? 2 3 3 1
(1) Jack studies secondary education.

which is derived from an underlying representation where word stress is assigned by the phonological word stress rules:

1 1 1 2 2 1
(2) Jack studies secondary education.

The NSR for English works in the same way for Italian, as in:

2 3 1 2 2 3 1
(3) Nella scuola superiore, Giorgio non studia a sufficienza.
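Read procedurally, the rule amounts to the following small function; the input is an invented per-word stress coding (1 = primary, 2 = secondary) for one IG, rather than the per-syllable marking used in the examples above:

```python
# Nuclear Stress Rule: within one IG, demote every primary stress (1) to a
# secondary stress (2) except the one farthest to the right.

def nuclear_stress_rule(stresses: list[int]) -> list[int]:
    if 1 not in stresses:
        return stresses
    rightmost = max(i for i, s in enumerate(stresses) if s == 1)
    return [2 if s == 1 and i != rightmost else s
            for i, s in enumerate(stresses)]

# One primary stress per word in "Jack studies secondary education"
print(nuclear_stress_rule([1, 1, 1, 1]))   # -> [2, 2, 2, 1]
```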
Lexical information is required to label verbs, and is passed on to the phonological component in order to assign focus to wh-questions and imperatives, as in:

F (4) Che tipo di libri scrive la persona che hai salutato ieri?
F (5) Smettila di far tutto quel baccano quando leggo un libro.

Lexical information is also essential in order to spot logical operators which induce emphatic intonation and attract the intonation centre of the utterance in their scope, usually shifting it to the left. These lexical items are words such as NO, MORE, MUCH, ALL, ALSO, ONLY, TOO, etc. (see Jackendoff, 1972), which modify the semantic import of the utterance and attract the intonation centre to the first PW in their scope; or, in case they modify the whole utterance, they move the focus to the following proposition, as in:

F (6) Anche Giorgio racconterà una bella storia.
F (7) Gli studenti hanno fatto molti esami nella sessione estiva.
(8) Il bandito non ha ucciso il poliziotto, ma la persona alle F sue spalle.
F (8a) Il bandito non ha ucciso il poliziotto.

A second set of FAR, the marked ones, assign Logical Focus according to structural information. This time the FP has to be supplied with syntactic and functional information relative to those constituents which have been displaced and moved to the left. This information is derived from the augmentation which is worked on the context-free c-structure grammar of the lexical-functional component, by means of the functional description which serves as an intermediary between the c-structure and the f-structure. Long distance phenomena like questions, relatives, clefting, subject raising, extrapositions and so on are easily spotted by the use of variables, which can represent both immediately dominated metavariables (specified as subcategorization features in the lexicon) and bounded domination metavariables, whose nodes are attached farther away in the c-structure and are empty in the f-structure representation. Focus is assigned to the OBJECT argument of the verb, as in:

F (9) John has some books to read.
F (17) A Maria è piaciuta la proposta che le ha lasciato Gino.

Focus marked (F) is optional and emphatic, but it is still different from focus marking in the corresponding English utterance (see Stockwell, 1972). No provision is made as yet for FAR meant to account for discourse level phenomena, knowledge-of-the-world variables, and cotextual rather than contextual variables, which operate beyond and across sentence and utterance boundaries. At this level, coreference between two constituents has to be determined by synonymous items, and synonymity calls for knowledge of the world and text level analysis, which is not available in a strictly formal system of rules. An example to this point is the following:

F (18) Tonight the children have been really nasty, so I scolded the bastards.

where focus is assigned to the verb instead of the final NP OBJECT, because the latter is an epithet of, or synonymous with, the NP OBJECT of the supraordinated proposition.
We can thus formulate the following FOCUS ASSIGNMENT RULES:
1. Questions
1.1 in wh-questions focus is assigned to the Verb; adverbials and other adjuncts are joined to the Verb and receive focus;
1.1.1 according to the functional roles assumed by the arguments of the verb, focus can be assigned to the NP argument acting as Agent SUBJECT;
1.1.2 if extrapositions of PP from NP are in act, or a question word like "perché" is present, focus is assigned to the PP;
1.2 in yes/no questions and echo questions, assign Focus phonologically;
2. Imperatives
Focus is assigned to the Verb according to predicate argument structure; adjuncts are joined to the Verb and receive focus;
3. Declaratives
3.1 if there are arguments displaced to the left of the SUBJECT, focus will be assigned to the last constituent farthest to the right by the NSR; topicalizations, clefting and some kinds of extraposition attract focus to the displaced argument;
3.2 if there are propositional complements, Focus will again be assigned by the NSR;
3.3 parentheticals, appositives and non-restrictive relatives will be assigned comma intonation;
3.4 with multiple embedded structures, focus assignment is conditioned by the presence of a lexical SUBJECT not anaphorically controlled by the SUBJECT of a supraordinated proposition; if so, more IGs will be built and more than one focus will result.

So far, we have described the rules with which the FP is equipped. We shall now deal with the psycholinguistic and cognitive aspects of the FP which, as we said at the beginning, is a model to simulate the process of reading aloud any text. From the previous description, it would seem that a speaker analyses the utterance proceeding at first bottom-up, until all low level rules have been applied up to the structure of the PW; subsequently, he should apply high level rules and build up IGs operating top-down. In fact, the two procedures will have to interact at certain points of the utterance, so that both low and high level rules are applied concurrently and fluent reading aloud results. Whereas the speaker applies low level rules each time the graphic boundary of a word is reached, to apply high level rules he will have to wait for the end of an IG, which could be determined phonologically or by lexical-functional information. Intuitively, as he proceeds in the reading process, the speaker will stress open class words and destress closed class ones; he will assign the internal stress hierarchy, and at the same time look for the most adequate sites at which to assign main pauses; he will apply external sandhi rules, modifying, if required, the previous internal stress hierarchy; he will build up the pitch contour according to the intonational typology appropriate to the utterance he is producing; the intonation centre may end up shifted to the left if he encounters logical operators, or at the end of the utterance, provided that it is not a complex proposition with embedded and subordinate structures in it.

To carry out such an interchange of rule application between the two levels of analysis of the utterance, the FP has to be able to jump from one level to the other if need be. It is therefore provided with a window which enables it to do a look-ahead in order to acquire two kinds of information: one related to the presence of blanks, or graphic boundaries between words, and the other related to the presence of punctuation marks. The window we have devised for the FP enables it to inspect five consecutive words, but not to know which of these words will become the head of a PW or a PW itself, at least not before the low level rules apply. The function of the window is thus limited to the individuation of possible sites for punctuation pauses. But this is also what a reader will probably do while reading the text: as a matter of fact, he will surely want to know how many graphic words are left before the end of the utterance is reached. Graphic information provided by the window is therefore vital both for low level and high level rule application.
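A hedged sketch of such a window, where the tokens, the window size and the returned fields are invented for illustration; the point is only that the look-ahead inspects graphic boundaries and punctuation, not linguistic structure:

```python
# Five-word look-ahead: peek at the next few graphic words, note any
# punctuation (candidate pause sites) and how many tokens remain.

WINDOW_SIZE = 5
PUNCTUATION = {",", ".", ";", ":", "?", "!"}

def look_ahead(tokens: list[str], position: int) -> dict:
    window = tokens[position + 1 : position + 1 + WINDOW_SIZE]
    return {
        "window": window,
        "pause_sites": [w for w in window if w in PUNCTUATION],
        "tokens_left": len(tokens) - position - 1,
    }

tokens = "Oggi , dopo la lezione , Giorgio torna a casa .".split()
print(look_ahead(tokens, 1))
# -> {'window': ['dopo', 'la', 'lezione', ',', 'Giorgio'],
#     'pause_sites': [','], 'tokens_left': 9}
```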
As far as low level rules are concerned, the local bottom-up procedure is well justified, since the reader will first want to know if the word ends with a graphic stress mark, assigning word stress immediately; if this is not the case, he will turn to the penultimate syllable, which is the site where Italian word stress assignment is decided, and he will carry out a syllable count if needed. The word stress rules will apply and the internal stress hierarchy will be assigned.

The main decision to be taken before high level rules may start to apply regards pauses. As we said before, visual information may guide the reader together with the phonological decisions previously taken. But a quantitative count of the words still left to process is only the first criterion, which has to be confirmed by qualitative analysis on a structural level. Structurally assigned pauses have to account for subordinate and coordinate propositions as well as embedded ones. Whereas comma intonation has to be assigned to appositives, parentheticals and non-restrictive clauses, subordinate propositions may be assigned Focus. Graphic information (the presence of one or two commas in the utterance) may thus receive two completely different interpretations: the FP has to individuate subordinate clauses, which are usually preceded by adverbials, linkers or conjuncts such as SE, QUANDO, SEBBENE, PERCHE', BENCHE', etc., which cause temporary information storage and a suspension of FAR application. Focus goes to the subordinate only if it comes at the beginning of the utterance and it is not a proposition of the kind of concessives, consecutives, conditionals or adversatives, which are easily detected from the kind of conjunct introducing them. As far as embedded clauses are concerned, waiting for the lexical-functional component to be activated, the FP operates only through the individuation of verbs and of complementizers. In particular, the presence of "che" may induce a pause only if the embedded clause is right-branching. Completives, like infinitives and indirect questions, as well as restrictive clauses, do not require a pause unless a lexical subject is present (see Fig. 3).

We said at the beginning that the FP is the terminal section of a system of synthesis by rule; we also said that the performance-oriented apparatus of phonological rules is meant to simulate the movements of the vocal tract of a speaker reading any Italian text aloud. To bring the FP as close as possible to the linguistic realization process we have undertaken experimental work in order to detect the characteristics of normal intonational and accentual phenomena in the process of reading aloud. Ten speakers repeated, several times, long utterances like the one shown in Figs. 4a/b. We measured the intensity curve and the F0 curve by means of a mingograph; durations were measured on an oscilloscope by means of a computer program scanning the sound wave every 8 ms. The acoustic data were very consistent, particularly the duration and intensity data, so that they were implemented in the speech synthesizer; perception tests demonstrated that both intelligibility and naturalness were remarkably improved. We include in Fig. 5 the phonological structure of the utterance analysed, which is built according to the construal rules reported in the paper (see also Nespor & Vogel, 1982; Selkirk, 1980).
null
Main paper: the phonematic transcription: As far as phonematic transcription of Italian texts is concerned, there seems to be no such difficulties as for English. In fact "letter-to-sound" rules are only a few and quite straightforward to be described. There are a number of exceptions and counterexceptions to the rules which have to be specified, but no dictionary lookup seems to be needed. What creates the main difficulties are digraphs and trigrapbs which are ambiguous in that they can render both stops and palatals; some of the decisions concerning trigraphs must be taken after stress has been assigned by word stress rules. The following graphemes have been transcribed into symbols denoting "phonetic elements": K = CH, C+A,+O,+U KK = CCH, CC+A,+Os+U --~ /k/ % = CI, rE, CI.Vowel %% = CCI, CCE, COl+Vowel ---> It~l J = GI, GE, GI÷Vowel JJ = GGI, GGE, GGI+Vowel ---> /03/ / = SCI,SCE,SCI+Vowel ---> /S/ < = GLI,GLE,GLI+Vowel ---> /~/ > = GN+Vowel ---> /3~/ X = Voiced S XX = Geminate S ---> /z/ & = Voiced Z && = Geminate Z ---> /dz/ And here are some exceptions: GLICINE, ANGLIA, GEROGLIFICO where GL = /gl/ not //./ FARMACIA, LUCIAwhere Cl : ItIil not It~l BUGIA, AEROFAGIA, NOSTALGIA whore GI = /d~i/ not /d3/ SCIA where SCI = /$i/ not /S/ Here below we include the flowchart of the phonological rules for the transcription of graphemes S and Z which, as we said, have both voiced/unvoiced phonemes. As it can be easily seen, the two graphemes have been treated together by the same set of rule operating conjunctively: thus a remarkable economy and simplicity has resulted; as to the theoretical import of using one and thesame algorithm, it has been shown that voiced S/Z decisions obey to similar underlying phonological rules. word stress rules: It is our opinion 'that Italian speakers do not use directly morpho-syntactic information to assign word stress, but an ordered set of phonological rules to lexical items completely specified in a lexicon, together with some morphological information -relatively only to a subclass of word types; syntactic category information is limited to the verb class. In other words, Italian is not a free-stress language, as diffusedIy discussed in Delmonte (1981) . Speakers analyse fully specifies lexical items by blocks of word stress rules ordered sequentially, which address different types of words according to syllable structure. Words are made to enter each rule block disjunctively, that is each word either enters a block and receives stress, or is passed on to the next block. Exceptions are processed first. No word can be sent back to /&/ iv. BLOCK III deals with trisyllabic words and with all words ending with -ERVowe1#, in which stress may result on the penultimate syllable if exception, and on the antepenult if regular; v. BLOCK IV deals with all words with more than 3 syllables; vi. BLOCK V with further subroutines, deals with words either ending with a syllable containing more than one vowel, or with more than one vowel in penultimate syllable -biphonematic, trtphonematic or ~etraphonematic vowel groups may result in diphthong, triphthong, or hiatuses like "bugia", • acciaio n, "aiuole". Word stress rules like Rule I take into account a series of phonotactic conditions as well as the syntactic category of verb which is essential to the treatment of homographs and to word stress assignment. In fact, Italian is a language very rich in homographs such as "'ambito -am'bito n, "'aprilea'prile" and so on. 
Usually, by varying the position of stress also the syntactic category will vary. Such words are included in a table lookup and syntactic category is decided according to contextual information. Another class of homographs, belonging this time to the one and same syntactic category, is made up by such words as "ri'cordati -ricor'dati n, "im'picciatiimpi'cciati", which are treated also according to context-, [ ai . 1 / I:'lvJ< > } ....... C,< + 8'~/ ~e V, --> [1 stres~ I RULE I.ual information and to the position they occupy in the utterance. If they come in first position or after a pause, it is assumed that they are cliticized imperatives and stress is assigned to the antepenultimate syllable; if they do not have that position in the utterance and an unstressed word precedes them,they are treated as past participles and stress is assign. ed to the penultimate syllable (See Fig.2 ). internal word stress hierarchy: These rules take mainly decisions about secondary stress assignment and also about an adequate definition of all unstress. ed syllables preceding and following the stressed one. To assign secondary stress the FP builds up the MP structure. This is done by counting the number of syllables preceding the stressed one. The rule states that the FP has to alternate one unstressed syllable before each primary or secondary stressed Restructuring may result in words with three or more than three syllables before the primary stressed one, as in: "f~lici'ta" "aut~ntici'ta" "artificiali'ta" "fot6gra'fare" "ctnem~to'grafico" "matem~tica'mente" "rappres~ntativa'mente" "utilltar]stica'mente" "preclpitevollssimevol'mente" According to the number of syllables, two unstressed syllables may precede or follow the secondary stressed one. The Restructuring Rule for the. MF takes into account performance facts which require that the number of secondary stressed syllables cannot be more than two when speaking at normal rate, but also that no more than three unstressed syllables may alternate stressed ones. To produce particular emphasis, i.e. if the word constitutes in itself an utterance, there may be obviously an increase to three secondary stresses in the same word or even to four as in "precipit~vol]ssim~vol'mente'. This fact will slow down the speaking rate at values -number of syllables per second -which is under the norm, only to suit the speaker's aim to produce emphasis. external sandhi rules: Up to this point, low level rules have built PW by stress ing some words and destressing some other words which have become proclitics and are joined to the first stressed word on their right to build a PW as in "della nostra parte" (on our side). High level rules localize punctuation pauses and start to apply external sandhi rules, which may elide a vowel, as in "la famigli~ ~gnelli", "ii mar~ ~ molto agitato" (RULE II); or they may produce schwa-like vowels as in "hann~nteresse", "~ incredibile" (RULE III); retract primary stress as in "'dottor m 'Romolo", "'ingegner 'Rossi" (RULE IVa/b). In the latter case, stress rules have to move back primary stress and to unstress the remaining syllables. 
It is essential to apply these rules in this phase, because intonation centre may only be assigned to primary stressed syllables: exceptions are represented either by auxillaries which can assume the role of lexical verbs as in "oggi non ci sono" (today I'm not there), nho chie° sto ma non ce l'hanno" (I asked but they haven't got it); or by clitics and adjectives which can become pronouns as in "non ci vengo con re" (I don't come with you), "preferisco quella" (I prefer that one). V ~@/--~ [+] ~ ig construal rules: At this point the FP shall have to be provided of rules which transform one or more PWs joining them into an IG as well as of rules which assign the intonation centre of the utterance.The two operations are dependent on Rule of IG construal and on Focus Assignment Rules or FAR. IG Construal Rules should intuitively build well formed IGs. General well-formedness conditions could be established so that phonological facts reflecting performance limitations as well as syntactic and semantic phenomena can be adequately taken into account. These conditions are as follows: CONDITIONS A. determined by intrinsic characteristics of the functioning of memory and of the articulatory apparatus which impose restrictions on the length of an IG -length is defined in terms of the number of constituents, i.e. PWs, to be packed into an IG; this number could vary with the speaking rate and other performance parameters which are strictly related to temporal and spatial limitations of the language faculty; CONOITIONS B. determined by the need to transmit into an IG chunks of conceptual and semantic information concluded in itself and related to the rules of the internal grammar. Construal Rules referring to Conditions A. will first base their application on punctuation, assigning main pauses for each comma, fu11-stop, colon, semi-colon detected in the text.Restructuring may then take place according to the number of constituents present in each IG; if less than three, the IG is too small to stand on its own, and it will be joined to the preceding one; if more than seven PWs, and the utterance is not yet ended, two IGs wi11 result according to phrase structure analysed by the grammar component, or provisionally by contextual information based on syntactic category labels, and on the presence of functional words which are regarded as proclitics and should be joined to the first following PW.To satisfy Condition B. phonological information is insufficient; syntactic and semantic information shall have to be supplied to the FP. The theoretical proposal which,in our opinion will suit best our performance oriented processor is the lexical functional one, diffusedly discussed in Bresnan (1978 Bresnan ( ,1980 Bresnan ( , 1982 , Kaplan & Bres.,an (1981), G~; oar (1980 . The lexical functional component is made up by two subcomponents: I. a lexicon, where each entry is completely specified and has associated subcategorization features; lexical items subcategorize for such universal functions as SUBJECT, OBJECT and so on, and not for constituent structure categories; lexical items exert selectional restrictions on a subset of their subcategorized functions; the predicate argument structure of a lexical item lists the arguments for which there are selectional restrictions. Each lexical item includes a lexical form which pe!rs arguments with functions, as well as the grammatical function assignment which lists the syntactically ;uFcategorized functions. 
context-free rules to generate syntactlc constituent structures. The combination of the ~wo descriptions will result in a constituent structure and a functional structure which represent formally the grammatical relations of the utterance analysed in terms of universal functions. Functional relations intervening between predicate argument structure and adjuncts or complements are determined by a theory of control which is an integral part of the lexical functional grammar. At this point, we can formulate the following RULES OF IG CONSTRUAL 1. Constituents moved .by dislocations, clefting, extraposittons, and raising, obligatory form at least one IG (for the exceptions see Oelmonte, 1983); 2. Starting with the first PW of an utterance, join into one IG all PWsuntil you reach: 2.1 the Verb, in Wh-questions, and imperatives; 2.2 the last element functionally controlled by a VP, i.e. an argument or a subordinate clause; complements or adjuncts functionally controlled by the Subject of the Object; 2.3 the last element anaphorically controlled by a supraordinated clause where the matrix Subject appears, control is expressed at functional level by thematic restrictions. In this way, pauses will be assigned to the most adequate sites taking into account both performance and structural restrictions. focus assignment rules (far): We can distinguish between two kinds of FAR, marked and unmarked ones. Unmarked FAR are dependent on phonological and lexical information and give rise to Phonological Focus; marked FAR are dependent on structural information and give rise to Logical Focus (See Gueron, 1980) . Phonological information is used to account for utterances such as simple declaratives, imperatives, wh-questions, yes/ no question, echo questions, where IGs can be built without structural information and the Nuclear Stress Rule can be made to apply in a straightforward way. The Nuclear Stress Rule (see Chomsky & Halle, 1968), can be reformulated as follows: "within an IG reduce to secondary stresses all primary stresses except the one farthest to the right n, as in: 2 ? 2 3 3 1 (1) Jack studies secondary education. which is derived from an underlying representation where word stress is assigned by phonological word stress rules, 1 1 1 2 2 1 (2) Jack studies secondary education. The NSR for English works in the same way for Italian, as in: 2 3 1 2 2 3 1 (3) NeIia scuola superiore, Ginrgiu non studia a sufficienza. lexical information is required to label verbs, and is passed on to the phonological component in order to assign focus to wh-questions and imperatives as in: F (4) Che tipo di libri scrive la persona che hal salutatn ieri? F (5) Smettila di far tutto quel baccano quando leggo un libro. Lexical information is also essential in order to spot logical operators which induce emphatic intonation and attract the intonation centre of the utterance in their scope, usually shifting it to the left. These lexical items are words such as NO, MORE, MUCH, ALL, ALSO, ONLY, [00 etc. (see Jackendoff, 1972) , which modify the semantic import of the utterance and attract the intonation centre to the first PW in their scope; or in case they modify the whole utterance, they move the focus to the following proposition, as in: F (6) Anche Giorgio racconter~ una bella storia. F (7) Gli studenti hanno fatto multi esami nella sessione estiva. (8) I1 bandito non ha ucciso il poliziotto, ma la persona alle F sue spalle. 
F (8a) I1 bandito non ha ucciso il poliziottOo A second set of FAR, the marked ones, shall assign Logical focus according to structural information. This time the FP shall have to be supplied by syntactic and functional information relatively to those constituents which have been displaced and have been moved to the left. This information is derived from the augmentation which is worked on the context-free c-structure grammar of the lexical functional component, by means of the functional description which serves as an intermediary between c-structure and the f-structure. Long distance phenomena like questions, relatives, clefting, subject raising extrapositions and so on are easily spotted by the use of variables which can represent both immediately dominated metavariabias -specified as subcategorization features in the lexiconand bounded domination metavariables, the nodes to which they will be attached are farther away in the c-structure, and are empty in f-structure representation. Focus is assigned to the OBJECT argument of the verb as in: F (g) John has some books to read. F 10 (r) r (17) A Maria $ piaciuta la proposta chele ha lasciato Gino. Focus marked (F) is optional and emphatic, but it is still different from focus marking in the corresponding English utterance (see Stockwell, 1972) . No provision is made as yet for FAR meant to account for discourse level phenomena, knowledge of the world variables, cotextual rather than cQntextual variables, which operate beyond and across sentence and utterance boundaries. At this level, coreference between two constituents shall have to be determined by synonymous items~ and synonymity calls for knowledge of the world, text level analysis which is not available in a strictly formal system of rules. Examples to this point is the following: F F (18) [onight the children have been really nasty, so I scolded the bastards.where focus is assigned to the verb instead of the NP OBJECT final because the latter is epithet of or synonymous with the NP OBJECT of the supraordinated proposition. We can thus formulate the following: FOCUS ASSIGNMENT RULES 1. Ouestions 1.1 in wh-questions focus is assigned to the Verb;adverbials and other adjuncts are joined to the Verb and receive fOCUS; 1.1.1 according to the functional roles assumed by the arguments of the verb, focus can be assigned to the NP argument acting as Agent SUBJECT; 1.1.2 if extrapositions of PP from NP are in act, or a question word like "perch," is present, focus is assigned to the PP; 1.2 in yes/no question and echo questions, assign Focus phonologically; 2. Imperatives Focus is assigned to the Verb according to predicate argument structure; adjuncts are joined to the Verb and receive focus; 3. Oeclaratives 3.1 if there are arguments displaced to the left of the SUBJECT, focus will be assigned to the last constituent farthest to the right by NSR; topicalizations, clefting and some kinds of extraposition attract focus to the displaced argument; ).2 if there are propositional complements, Focus will be assigned again by NSR; 3.3 parentheticals, appositives, non-restrictive relatives will be assigned comma intonation; 3.4 with multiple embedded structures, focus assignment is conditioned by the presence of a lexical SUBJECT non anaphorically controlled by the SUBJECT of a supraordinated proposition; if so, more IGs will be built and more than one focus will result. the computational mechanism: So far, we have described the rules of which the#P is equipped. 
We shall now deal with the psycholinguistic and cognitive aspects of the FP which, as we said at the beginning, is a model to simulate the process of reading aloud any text. From the previous description, it would seem that a speaker analyses the utterance proceeding at first bottom-up, until all low level rules have been applied to the structure of PW; subsequently, he should apply high level rules and he should build up IGs operating top-down. In fact, the two procedures will have to interact at certain points of the utterance so that both low and high level rules will be applied contemporarily and fluent reading aloud will result. Whereas the speaker applies low level rules each time the graphic boundary of a word is reached, to apply high level rules he will have to wait for the end of an IG, which could be determined phonologically or by lexical functional information. Intuitively, as he proceeds in the reading process, the speaker will stress open class words and destress closed class ones; he will assign the internal stress hierarchy, and at the same time he will look for the most adequate sites to assign main pauses; he will apply external sandhi rules, modifying, if required, the previous internal stress hierarchy; he will build up the pitch contour according to the intonational typology appropriate to the utterance he is producing; the intonation centre may result shifted to the left if he encounters logical operators, or to the end of the utterance, provided that it is not a complex proposition with embedded and subordinate structures in it.
To carry out such an interchange of rule application between the two levels of analysis of the utterance, the FP shall have to jump from one level to the other if need be. It will then be provided with a window which enables it to do a look-ahead in order to acquire two kinds of information: the one related to the presence of blanks, or graphic boundaries between words, and the other related to the presence of punctuation marks. The window we have devised for the FP enables it to inspect five consecutive words, but not to know which of these words will become the head of a PW or a PW itself, at least not before low level rules apply. The function of the window is then limited to the individuation of possible sites for punctuation pauses. But this is also what a reader will probably do while reading the text: as a matter of fact, he will surely want to know how many graphic words are left before the end of the utterance is reached. Graphic information provided by the window is vital then both for low level and high level rules application.
As far as low level rules are concerned, the local bottom-up procedure is well justified since the reader will want to know first if the word ends with a graphic stress mark, assigning word stress immediately; if this is not the case, he will turn to the penultimate syllable, which is the site where Italian word stress assignment is decided, and he will carry out syllable count if needed. The word stress rule will apply and the internal stress hierarchy will be assigned.
The main decision to be taken before high level rules may start to apply regards pauses. As we said before, visual information may guide the reader together with phonological decisions previously taken. But the quantitative count of words still left to process is only the first criterion, which shall have to be confirmed by qualitative analysis on a structural level.
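A minimal sketch of the five-word look-ahead window described above, assuming a plain token list as input: it collects candidate pause sites from graphic information (punctuation) alone, leaving the structural confirmation discussed next out of the picture. Function and variable names are invented for illustration.

```python
# Sketch: scan a token stream with a five-token window and collect the
# positions where graphic information alone suggests a possible pause.
WINDOW_SIZE = 5
PUNCTUATION = {",", ";", ":", ".", "?", "!"}

def candidate_pause_sites(tokens):
    sites = set()
    i = 0
    while i < len(tokens):
        window = tokens[i:i + WINDOW_SIZE]
        # Quantitative criterion: a punctuation mark visible in the window
        # marks a candidate site; structural analysis must still confirm it.
        for j, w in enumerate(window):
            if w in PUNCTUATION:
                sites.add(i + j)
        # Stop sliding once the window already reaches the end of the input.
        if i + WINDOW_SIZE >= len(tokens):
            break
        i += 1
    return sorted(sites)

tokens = ["Nella", "scuola", "superiore", ",", "Giorgio", "non",
          "studia", "a", "sufficienza", "."]
print(candidate_pause_sites(tokens))   # -> [3, 9]
```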
Structurally assigned pauses shall have to account for subordinate and coordinate propositions as well as embedded ones. Whereas comma intonation will have to be assigned to appositives, parentheticals and non-restrictive clauses, subordinate propositions may be assigned Focus. Graphic information (the presence of one or two commas in the utterance) may thus receive two completely different interpretations: the FP shall have to individuate subordinate clauses, which are usually preceded by adverbials, linkers or conjuncts such as SE, QUANDO, SEBBENE, PERCHE', BENCHE', etc., which cause temporary information storage and a suspension of FAR application. Focus goes to the subordinate only if it comes at the beginning of the utterance and it is not a proposition of the kind of concessives, consecutives, conditionals, adversatives, which are easily detected from the kind of conjunct introducing them.
As far as embedded clauses are concerned, waiting for the lexical functional component to be activated, the FP operates only through the individuation of verbs and of complementizers. In particular, the presence of "che" may induce a pause only if the embedded clause is right-branching. Completives, like infinitives and indirect questions, as well as restrictive clauses, do not require a pause unless a lexical subject is present (see Fig. 3).
acoustic parameters and phonetic detail: We said at the beginning that the FP is the terminal section of a system of synthesis by rule; we also said that the performance oriented apparatus of phonological rules is meant to simulate the movements of the vocal tract of a speaker reading aloud any Italian text. To bring the FP as close as possible to the linguistic realization process we have undertaken experimental work in order to detect the characteristics of normal intonational and accentual phenomena of the process of reading aloud. Ten speakers have repeated ~ times long utterances like the one shown in Figs. ~a/b. [Figure: top-down procedure] We measured the intensity curve and the F0 curve by means of a mingograph; durations were measured on an oscilloscope by means of a computer program scanning every 8 ms of the sound wave. Acoustic data were very consistent, particularly the duration and the intensity ones, so that they were implemented in the speech synthesizer; perception tests demonstrated that both intelligibility and naturalness were remarkably improved. We include in Fig. 5 the phonological structure of the utterance analysed, which is built according to the construal rules reported in the paper (see also Nespor & Vogel, 1982; Selkirk, 1980).
0. introduction: The FP, which we shall describe in detail in the following pages, is the terminal section of a system of speech synthesis by rule without vocabulary restrictions, implemented at the Centre of Computational Sonology of the University of Padua. From the linguistic point of view the FP is a model to simulate the operations carried out by an Italian speaker when reading aloud any text. To this end, the speaker shall use the rules of his internal grammar to translate graphic signs into natural speech. These rules will have to be implemented in the FP, together with a computational mechanism simulating the psychological and cognitive functions of the reading process. At the phonological level the FP has to account for low level or segmental phenomena, and high level or suprasegmental ones.
The former are represented by three levels of structure, that is S, MF, PW, and are governed by phonological rules which are meant to render the movements of the vocal tract and the coarticulatory effects which occur regularly at word level and at word boundaries. The latter are represented by one level of structure, the IG, and are governed by rules which account for long range phenomena like pitch contour formation, intonation centre assignment, pauses. In brief, the rules that the FP shall have to apply are the following:
i. transcription from grapheme to "phoneme", including the most regular coarticulatory and allophonic phenomena of the Italian language;
ii. automatic word stress assignment, including all the most frequent exceptions to the rules as well as individuation of homographs, which are very common in Italian;
iii. internal word stress hierarchy, with secondary stress assignment, individuation of unstressed diphthongs, triphthongs, hiatuses;
iv. external sandhi rules, operating at word boundaries and resulting in stress retraction, destressing, stress hierarchy modification, elision by assimilation and other phenomena;
v. destressing of functional words listed in a table lookup;
vi. pauses marked off by punctuation; pauses deriving from a count of PWs; pauses deriving from syntactic structural phenomena; comma intonation marking of parentheticals and similar structures;
vii. rules to restructure the IG when too long (more than ? PWs) or too short (less than 5 PWs);
viii. Focus Assignment Rules or FAR, which at first mark Phonological Focus, or intonation centre dependent on lexical and phonologically determined phenomena;
ix. FAR which mark Logical Focus, or intonation centre dependent on structurally determined phenomena.
From a general computational point of view, the FP operates bottom-up to apply low level rules, analysing each word at a time until the PW structure is reached; it operates top-down to apply high level rules and to build the higher structure, the IG.
Appendix:
null
null
null
null
{ "paperhash": [ "marcus|a_theory_of_syntactic_recognition_for_natural_language", "allen|synthesis_of_speech_from_unrestricted_text", "liberman|on_stress_and_linguistic_rhythm", "jackendoff|semantic_interpretation_in_generative_grammar", "chomsky|the_sound_pattern_of_english" ], "title": [ "A theory of syntactic recognition for natural language", "Synthesis of speech from unrestricted text", "On stress and linguistic rhythm", "Semantic Interpretation in Generative Grammar", "The Sound Pattern of English" ], "abstract": [ "Abstract : Assume that the syntax of natural language can be parsed by a left-to-right deterministic mechanism without facilities for parallelism or backup. It will be shown that this 'determinism' hypothesis, explored within the context of the grammar of English, leads to a simple mechanism, a grammar interpreter. (Author)", "For many applications, it is desirable to be able to convert arbitrary English text to natural and intelligible sounding speech. This transformation between two surface forms is facilitated by first obtaining the common underlying abstract linguistic representation which relates to both text and speech surface representations. Calculation of these abstract bases then permits proper selection of phonetic segments, lexical stress, juncture, and sentence-level stress and intonation. The resulting system serves as a model for the cognitive process of reading aloud, and also as a stable practical means for providing speech output in a broad class of computer-based systems.", "JSTOR is a not-for-profit service that helps scholars, researchers, and students discover, use, and build upon a wide range of content in a trusted digital archive. We use information technology and tools to increase productivity and facilitate new forms of scholarship. For more information about JSTOR, please contact [email protected].. The MIT Press is collaborating with JSTOR to digitize, preserve and extend access to Linguistic Inquiry.", "Like other recent work in the field of generative-transformational grammar, this book developed from a realization that many problems in linguistics involve semantics too deeply to be solved insightfully within the syntactic theory of Noam Chomsky's Aspect of the Theory of Syntax. Dr Jackendoff has attempted to take a broader view of semantics, studying the important contribution it makes to the syntactic patterns of English.The research is carried out in the framework of an interpretive theory, that is, a theory of grammar in which syntactic structures are given interpretations by an autonomous syntactic component. The book investigates a wide variety of semantic rules, stating them in considerable detail and extensively treating their consequences for the syntactic component of the grammar. In particular, it is shown that the hypothesis that transformations do not change meaning must be abandoned; but equally stringent restrictions on transformations are formulated within the interpretive theory.Among the areas of grammar discussed are the well-known problems of case relations, pronominalization, negation, and quantifiers. 
In addition, the author presents semantic analyses of such neglected areas as adverbs and intonation contours; he also proposes radically new approaches to the so-called Crossover Principle, the control problem for complement subjects, parentheticals, and the interpretation of nonspecific noun phrases.", "Since this classic work in phonology was published in 1968, there has been no other book that gives as broad a view of the subject, combining generally applicable theoretical contributions with analysis of the details of a single language. The theoretical issues raised in The Sound Pattern of English continue to be critical to current phonology, and in many instances the solutions proposed by Chomsky and Halle have yet to be improved upon.Noam Chomsky and Morris Halle are Institute Professors of Linguistics and Philosophy at MIT." ], "authors": [ { "name": [ "Mitchell P. Marcus" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "J. Allen" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "M. Liberman", "A. Prince" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "R. Jackendoff" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Noam Chomsky", "M. Halle" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null, null, null ], "s2_corpus_id": [ "6616065", "42813358", "140986621", "61367317", "60457972" ], "intents": [ [], [], [], [ "background" ], [ "background" ] ], "isInfluential": [ false, false, false, false, false ] }
null
497
0.016097
null
null
null
null
null
null
null
null
bcebd689d722e564fec917de61616732a9accedb
5440348
null
Natural Language Input for Scene Generation
In this paper a system which understands and conceptualizes scene descriptions in natural language is presented. Specifically, the following components of the system are described: the syntactic analyzer, based on a Procedural Systemic Grammar, the semantic analyzer relying on the Conceptual Dependency Theory, and the dictionary.
{ "name": [ "Adorni, Giovanni and", "Di Manzo, Mauro" ], "affiliation": [ null, null ] }
null
null
First Conference of the European Chapter of the Association for Computational Linguistics
1983-09-01
18
20
null
In this paper a system is presented which understands and conceptualizes scene descriptions in natural language (Italian) and produces simple static images of the scenes. It is part of a larger project that aims at understanding the description of static scenes, reasoning (in case of incompleteness or inconsistency) and dialoguing about them, and finally generating and displaying them. The Input Analyzer (IA) of the system is the most stable and experimented component and it is the topic of this paper. It consists of a Syntactic Analyzer, a Cognitive Data Base (CDB) and a Semantic Interpreter.
The syntactic analysis is performed by means of a Procedural Systemic Grammar (PSG) (McCord, '77). The main characteristic of the PSG parser is that the operation flow is highly structured, since different levels of the analysis are associated to the syntactic units of the sentence. Five processes can be activated (CLAUSE, COMPL.GR, NOUN.GR, ADJ.GR and VERB.GR), devoted to recognizing respectively: (i) the sentences, (ii) the prepositional phrases, comparatives, quantification and noun phrases, (iii) the components of the noun phrases, (iv) the adjectives and their modifiers, (v) the verb and its modifiers.
FEATURE_NETWORK: array(FEATURE) of LINK;
Each NODE represents a feature identified by its NAME; the ALTERNATE pointer allows the connection in a circular list of mutually exclusive features as in SHRDLU (Winograd, '72). Each process gives as output a fragment of the FEATURE_NETWORK manipulated to describe the input; this is performed by means of a set of functions which test the presence of a feature in the FEATURE_NETWORK, add and erase features, as described in McCord ('77). The process is divided into a set of sequential routines, called SLOTs, analyzing the functional components of a Syntactic Unit. In the function:
function FILLER(ARG1: PROCESS, ARG2: SET_OF_FEATURES): boolean;
ARG1 activates the appropriate process to fill the caller slot; the second argument of the function selects the set of features to which the called process must be initialized. This last feature-passing mechanism is absent in the original PSG; from our experience, we found it useful in all the cases in which a choice in a syntactic level is determined by the superior level or by a larger context. Thus, for instance, the set of features characterizing a prepositional phrase is determined at the corresponding syntactic level by the preposition and the features of the nominal phrase; but further and not less important selection criteria can be imposed by the verb which is found in the upper level. The output of a simple analysis is shown in Fig. 2; it gives an idea of the syntactic representation.
The choice of PSG is mainly motivated by the possibility of parallel computation. A control structure allowing the parallel computation is:
cobegin ... coend;
It is a single input-output structure, very useful to handle alternative choices for the same computational level. In the case of mutually exclusive alternatives only one of the "n" processes activated by a cobegin control structure can end successfully. In the case of not mutually exclusive alternatives, it is still possible to use the cobegin control structure, but it is necessary to define a strategy for the selection of the most suitable alternative when the coend occurs. An experimental implementation in terms of parallel computation has been made on a multiprocessor system (Adorni et al., '79).
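A rough Python analogue of the SLOT/FILLER mechanism may make the control flow clearer. The Pascal-style declaration above only fixes the signature of FILLER; the process registry, the toy NOUN.GR and COMPL.GR recognizers and the feature representation below are invented for illustration and do not reproduce the real PSG processes.

```python
# Sketch of the slot/FILLER mechanism with feature passing between levels.
PROCESSES = {}   # name -> callable(features, tokens) -> (ok, fragment, rest)

def register(name):
    def deco(fn):
        PROCESSES[name] = fn
        return fn
    return deco

def filler(process_name, init_features, tokens):
    """FILLER(ARG1: PROCESS, ARG2: SET_OF_FEATURES): boolean.
    Activate the named process to fill the caller's slot, initializing it with
    the features selected by the caller (the feature-passing mechanism added
    to the original PSG)."""
    return PROCESSES[process_name](set(init_features), tokens)

@register("NOUN.GR")
def noun_group(features, tokens):
    # Toy recognizer: an optional article followed by a noun.
    i = 1 if tokens and tokens[0] in {"il", "la", "l'", "un", "una"} else 0
    if i < len(tokens):
        frag = {"features": features | {"NP"}, "head": tokens[i]}
        return True, frag, tokens[i + 1:]
    return False, None, tokens

@register("COMPL.GR")
def complement_group(features, tokens):
    # Toy recognizer: preposition + noun group; the preposition and the NP
    # features jointly determine the features of the prepositional phrase.
    if tokens and tokens[0] in {"di", "a", "da", "con", "su"}:
        ok, np, rest = filler("NOUN.GR", features | {tokens[0]}, tokens[1:])
        if ok:
            return True, {"prep": tokens[0], "np": np}, rest
    return False, None, tokens

ok, fragment, rest = filler("COMPL.GR", {"GOAL"}, ["a", "Roma"])
print(ok, fragment["prep"], fragment["np"]["head"])   # -> True a Roma
```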
Another version of this parser has been implemented in PASCAL (DiManzo et al., '79) and a version in FranzLisp is in progress.
The organization of knowledge, in this system, is based on a set of THOUGHTs. A THOUGHT is a frame-like structure within which new data are interpreted in terms of concepts acquired through previous experience (Minsky, '75), (Schank, Abelson, '77). Every THOUGHT has a TYPE which determines a set of operations applicable to it. The following predefined types are allowed (Adorni, DiManzo, '83):
-DESCRIPTIVE, that defines the complete description of an object, physical or abstract, animate or not.
-PROTOTYPE, that defines the structural part of a physical object in terms of generalized cones (Marr, Nishihara, '78). An example of definition of a simple prototype object is given in Fig. 3.
-JOINT, that defines the element of connection between physical objects, in order to build more complex objects or scenes (Fig. 4).
-SPATIALREL, that defines spatial relationships like "on, near, on the left of, ..." between objects.
All the linguistic relationships like "above, under, behind", and so on, are reduced to quantitative geometrical relationships between the coordinates of some points of the involved objects; this choice is motivated by the possibility of deriving a set of very general inference rules from analytic geometry (Adorni et al., '82), (Boggess, '79), (Boggess, Waltz, '79). The coordinates of an indefinite point P are given in the form:
COORD K OF P (REFERRED_TO A) = H
where K is a group of possible coordinates, H a set of values for these coordinates and A is the THOUGHT of the object to which the reference system used is connected. Fig. 5 shows the THOUGHT for a use of the preposition "on".
A spatialrel type THOUGHT can contain conceptualizations and prototype THOUGHTs; a joint type can contain only its description; a prototype type can contain joint or prototype THOUGHTs or descriptions in terms of generalized cones; all these types can be enclosed in a descriptive type, which can contain conceptualizations and all the types of THOUGHTs previously introduced. A descriptive type can include the following fields (Adorni, DiManzo, '83) (see ...):
-POSITION, gives the most common spatial relations between the described object and other objects in standard scenes, in terms of a spatialrel between prototype THOUGHTs;
-SUPPORT, contains the indication, in terms of descriptive THOUGHTs, of the objects which are supported in standard situations;
-COLOR and MADE, describe the possible set of colors and materials, while WEIGHT contains information about the range of possible weights;
-CONTENT, says, in terms of descriptive THOUGHTs, that the normal use of the object is as a container for other objects;
-DYNAMIC, contains the current expectations about the boundaries of the dimensions of the objects; it can be dynamically updated every time a new object of the same class enters the system's CDB.
The Semantic Interpreter of the IA interacts with the Syntactic Analyzer and operates on a set of rules in order to build the concepts a sentence was intended to mean.
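Before looking at the interpreter itself, a rough illustration of how a descriptive THOUGHT of the CDB might be encoded as a frame. The slot names follow the field list above; every value, and the DYNAMIC update function, is invented for illustration and is much simpler than the actual definitions (generalized cones, joints, spatialrel conceptualizations).

```python
# Sketch of a descriptive THOUGHT encoded as a frame with typed slots.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class Thought:
    name: str
    type: str                      # DESCRIPTIVE, PROTOTYPE, JOINT, SPATIALREL
    slots: Dict[str, object] = field(default_factory=dict)

table = Thought(
    name="TABLE",
    type="DESCRIPTIVE",
    slots={
        "POSITION": ("ON", "FLOOR"),          # spatialrel with another prototype
        "SUPPORT": ["BOOK", "GLASS"],          # objects supported in standard scenes
        "COLOR": ["brown", "white"],
        "MADE": ["wood", "plastic"],
        "WEIGHT": (5.0, 40.0),                 # plausible range, in kg
        "CONTENT": [],                          # a table is not a container
        "DYNAMIC": {"height_cm": (60, 110)},   # current expectations, updatable
    },
)

def update_dynamic(thought: Thought, key: str, value: float) -> None:
    """Widen the DYNAMIC expectations when a new instance of the class is seen."""
    lo, hi = thought.slots["DYNAMIC"][key]
    thought.slots["DYNAMIC"][key] = (min(lo, value), max(hi, value))

update_dynamic(table, "height_cm", 120)
print(table.slots["DYNAMIC"]["height_cm"])   # -> (60, 120)
```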
The output of this module is a Conceptual Dependency Network (Schank, '75), in which every nominal is substituted by a complex descriptive THOUGHT instantiated from the CDB.
Let us illustrate the procedure of analysis considering the following sentence (the translation is word by word in order to reproduce the problems of Italian):
(1) "l'uomo dai capelli grigi e' andato a Roma con l'auto di Giuseppe" (the man with the grey hair has gone to Rome with the car of Joseph)
The procedure of analysis has several steps:
A. Analysis of Words and Simple Phrases
During this step the entities which take part in the conceptualization are identified. In fact an indexed identifier Xi is associated to each object referred to in the sentence (each nominal), which points to one or more conceptualizations, contained in the field "descr" of each nominal in the CDB. The adjectives contained in the noun phrases are also analyzed during this step. Each of them adds some conceptualizations which contribute to further individuate the nominal. During this step personal pronouns are identified as:
Xi <==> ISA(HUMAN)
Temporal and local adverbials are also analyzed in this phase in order to assign to the sentence conceptualization a time and place identification according to certain rules described in (Adorni et al., '81). At the end of this step the sentence (1) is represented as follows:
identifier / nominal / conceptualization
X1 / uomo (man) / X1 <==> ISA(HUMAN)
X2 / capelli (hair) / X2 <==> ISA(HAIR)
X3 / Roma (Rome) / X3 <==> ISA(CITY), X3 <==> NAME(ROME)
X4 / auto (car) / X4 <==> ISA(CAR)
X5 / Giuseppe (Joseph) / X5 <==> ISA(HUMAN), X5 <==> NAME(JOSEPH)
The sentence (1) can then be read:
(2) "X1 da X2 e' andato a X3 con X4 di X5" (X1 from X2 is gone to X3 with X4 of X5)
The simple phrases of a sentence can either fill conceptual cases of a main conceptualization (like "a man is on a chair"), thus serving as 'picture producer' (PP), or further individuate a PP. Therefore they can be classified according to whether they modify:
a) the nominal that precedes (also not immediately); "i libri di Carlo" (the books of Charles)
b) the subject or object independently from their position; "Maria e' andata a Roma con Anna" (Mary has gone to Rome with Ann)
c) the action; "Maria e' andata a Roma con la macchina" (Mary has gone to Rome with the car)
The treatment of type b) and c) modifiers requires that the structure of the sentence is entirely known and cannot, in any case, be performed before the verb has been analyzed (subject and object are considered type c) modifiers). The modifiers in a), on the contrary, have a local role, limited to the PP they are to modify, and their relation to the sentence structure is marginal. They are, therefore, immediately associated to their corresponding nominals. In (2) "da X2" and "di X5" are of this kind and are consequently linked to X1 and X4, producing:
(3) "X1 e' andato a X3 con X4" (X1 has gone to X3 with X4)
In the "descr" field of THOUGHTs X1 and X4 the following information is added:
X2 <==> PART OF(X1)
X5 <==> OWNERSHIP(X4)
The embodying of a modifier creates complex PPs or CLUSTERs. Each CLUSTER has as its HEAD a b) or c) modifier, a conceptual index node modified by the accessory concepts. In our example "l'uomo dai capelli grigi", "a Roma", and "con l'auto di Giuseppe" are CLUSTERs, in which the head is always the leftmost nominal. The decision about the embodying of a modifier into its head is related to the classical problem of the placement of PP's.
In fact, it is not always the case that a prepositional phrase modifies a conceptual index node; it is often possible that it has to be embodied into another accessory modifier, as in: "il libro dell'uomo dal cappotto blu" (the book of the man with the blue coat).
At this step "splitting" of a conceptualization often occurs. In the sentence "Giovanni da' un colpo a Maria" (lit. John gives a blow to Mary), although two nuclei are present (da' & colpo), nevertheless the correct interpretation is "Giovanni colpisce Maria" (John hits Mary), instead of "Giovanni trasferisce il possesso dell'oggetto colpo a Maria" (John transfers the ownership of the object 'blow' to Mary)! We have observed that this phenomenon involves conceptualizations based on the primitives of "state", "action", and "spatial relationship" and relies only on the pairs ACTION-STATE, ACTION-SPATIAL RELATIONSHIP, and ACTION-ACTION. The regularities ruling the formation of these pairs have been found to depend only upon those conceptual primitives. This keeps the number of rules to be evaluated reasonably small, if compared with the number of CDB entries (~600 entries in the present implementation (Adorni et al., '81)). An example will illustrate the mechanism of reduction of the conceptual "splitting" as well as of disambiguation. The pair ACTION-SPATIAL RELATIONSHIP may be represented by "tirare su il braccio" (to raise the arm). The compound "tirare su" has the two meanings:
-innalzare (to raise), represented by a conceptualization based on the PROPEL primitive;
-to cheer up, represented by:
X <==> DO ==> S(Y(CHANGE STATE((FROM(HAPPINESS(N))) (TO(HAPPINESS(N))))))
The context helps disambiguation. In our example, the object of the spatial relationship being a physical object, the first alternative is selected. The rule performs a further control, discovering that the physical object is, in this case, PART OF(HUMAN); the PROPEL primitive is then substituted by the MOVE primitive.
The next step performed by the semantic module is the filling of the conceptual cases of the main conceptualization with the THOUGHTs instantiated during the previous steps. Again, standard rules are associated to prepositions and adverbs, and idiosyncrasies are also treated. These rules make use of messages sent by the syntactic component and look at the conceptual syntax of the main conceptualization. Through these rules the cluster "con X4" turns out to be 'instrumental' and the following conceptualization is then produced:
(4) X1 <==> USE <-- OBJ(X4)
Since the filler of the instrumental case of the main conceptualization has to be a conceptualization, the rule activated by the "con" modifier fills the instrumental case with (4). In (3), 'a X3' is placed in the destination of the directive case of the main conceptualization, because preposition 'a' is stated to indicate the 'destination' if the main conceptualization contains a PTRANS, PROPEL or MOVE with empty directive case; otherwise it indicates 'state'. "Andare a Roma" is thus distinguished from "essere a Roma" (to be in Rome). The result, for our example, is:
X1 <==> PTRANS <-- OBJ(X1) <-- DIR((FROM(NIL)) (TO(IN X3)))
The directive case, as shown in the above example, is not simply filled with a md; it is filled with a "spatial_relationship-md" pair. This is a general rule for our system, emphasizing the change of coordinates caused by an action. In our example this means that the primitive PTRANS has moved the object to a point whose coordinates are defined within the city of Rome. The result of the analysis of (1) is given in Fig. 9.
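A minimal sketch of the case-filling step just described: the behaviour of the prepositions 'a' (destination of the directive case, filled with a spatial-relationship pair) and 'con' (instrumental case, filled with a USE conceptualization) follows the text, while the dictionary-based representation and all names are invented for illustration.

```python
# Sketch: build a main conceptualization and fill its conceptual cases.
def fill_cases(verb_primitive, actor, clusters):
    """clusters is a list of (preposition, filler) pairs left after step A.
    The OBJ of the example's PTRANS is the actor himself, as in the text."""
    conc = {"primitive": verb_primitive, "ACTOR": actor, "OBJ": actor}
    for prep, filler in clusters:
        if prep == "a" and verb_primitive in {"PTRANS", "PROPEL", "MOVE"} \
                and "DIR" not in conc:
            # 'a' marks the destination of the directive case, filled with a
            # spatial-relationship pair rather than a bare nominal.
            conc["DIR"] = {"FROM": None, "TO": ("IN", filler)}
        elif prep == "con":
            # 'con' marks the instrumental case, whose filler must itself be a
            # conceptualization: the actor USEs the instrument.
            conc["INSTR"] = {"primitive": "USE", "ACTOR": actor, "OBJ": filler}
    return conc

X1, X3, X4 = "X1:man", "X3:Rome", "X4:car"
print(fill_cases("PTRANS", X1, [("a", X3), ("con", X4)]))
# -> {'primitive': 'PTRANS', 'ACTOR': 'X1:man', 'OBJ': 'X1:man',
#     'DIR': {'FROM': None, 'TO': ('IN', 'X3:Rome')},
#     'INSTR': {'primitive': 'USE', 'ACTOR': 'X1:man', 'OBJ': 'X4:car'}}
```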
The process of semantic interpretation is applied to every clause in the sentence, identified by a verb or a noun indicating an action. Segmentation into such clauses or nominalized clauses is obviously performed by the syntactic component, which also has non-standard rules for specific classes of (modal) verbs like: dovere (must), volere (to want), potere (can), incominciare (to start), .... These verbs constitute a single main conceptualization together with the embedded infinitive. Simple composition rules have been defined to combine the meaning of clauses (sentences). Thus for conjunction, as in "si alzo', si mise il cappello e apri' la porta" (he stood up, put on his hat and opened the door), the main conceptualizations associated to every proposition are connected by an 'and' relationship:
(si alzo') T1
and (si mise il cappello) T2 > T1
and (apri' la porta) T3 > T2
A time indication is also associated to every main conceptualization to emphasize the execution order of every action. Conceptual analysis of each single clause (sentence) is activated by this top level structure and at the end the resulting conceptualizations are linked one to the other.
In this paper a system for understanding a natural language input to a scene generator has been described. It makes use of a conceptual dependency semantic model, substantially modified inasmuch as syntax is kept apart from semantic interpretation and a fully formalized dictionary is used, much more complex than the one embodied in Schank's theory. The dictionary is particularly oriented to the generation of scenes, and the stress is on the representation of the structure of objects. The awareness of the structure of the objects is often intimately related to our capability of understanding the meaning of spatial relationships and other complex linguistic expressions. For instance, the meaning of "the cat is under the car" is clear, even if it may depend on the state of the car, moving or parked; on the contrary, the sentence "the cat is under the wall" is not clear, unless the wall has collapsed or it has a very particular shape. Our model tries to account for this understanding activity by means of the following features:
-an object is described at several levels of detail; in some cases, only a rough definition of the object dimensions can be sufficient, while in other cases a more sophisticated knowledge about the structure of the object itself is required;
-the characteristic features of an object are emphasized; the recognition of a feature allows the activation of particular rules and the generation of hypotheses about the presence of an object;
-the typical relationships among objects are described.
The interaction between syntactic and semantic analyzers seems rather complex, but it provides some valuable solutions to certain crucial points of computational linguistics, like PP's placement, conceptual splitting, idioms and preassembled expressions.
The syntactic analyzer, working top-down, yields a representation of the input sentence in which information about gender, number, person and tense is recorded and, for each function such as subj, obj, time, etc., the corresponding filler is identified, or a list of fillers is given in case of ambiguity. These two kinds of information are exactly what is useful for semantic interpretation and are picked up in various steps of the interaction by the semantic analyzer in order to build the main conceptualization and to fill its role.
Also MARGIE (Schank, '75) ... It also provides a simpler way of dealing with syntactic variants of the same sentence and a help in identifying coreferences. The semantic interpreter works fundamentally bottom-up and, although much is still to be attempted, it seems that it can usefully cooperate with a top-down parser to find the correct interpretation. These practical advantages will be taken into account also in the future development of the system. In fact it seems that, although no definite solution has been given to many linguistic problems, the interaction between two fully developed mechanisms controlling each other can provide an indication and a frame into which a more compact system can be built.
In the present version of the system the interaction between the two modules is strictly sequential. In a more compact analyzer, syntactic specialists, i.e. simplified pieces of grammar specialized in particular syntactic phenomena, will be called by the semantic interpreter according to opportunity. This second version is still being designed.
null
null
null
null
null
null
null
null
{ "paperhash": [ "adorni|cognitive_models_for_computer_vision", "waltz|visual_analog_representations_for_natural_languages_understanding", "boggess|computational_interpretation_of_english_spatial_prepositions", "marr|representation_and_recognition_of_the_spatial_organization_of_three-dimensional_shapes" ], "title": [ "Cognitive Models for Computer Vision", "Visual Analog Representations for Natural Languages Understanding", "Computational Interpretation of English Spatial Prepositions", "Representation and recognition of the spatial organization of three-dimensional shapes" ], "abstract": [ "This paper is focused on the relations existing between language and vision. Its goal is to discuss how linguistic informations about objects, shapes, positions and spatial relations with other objects can be integrated into a cognitive model tailored to spatial inferencing operations.", "In order for a natural language system to truly \"know what it is talking about,\" it must have a connection to the real-world correlates of language. For language describing physical objects and their relations in a scene, a visual analog representation of the scene can provide a useful target structure to be shared by a language understanding system and a computer vision system. \n \nThis paper discusses the generation of visual analog representations from input English sentences. It also describes the operation of a LISP program which generates such a representation from simple English sentences describing a scene. A sequence of sentences can result in a fairly elaborate model. The program can then answer questions about relationships between the objects, even though the relationships in question may not have been explicit in the original scene description. Results suggest that the direct testing of visual analog representations may be an important way to bypass long chains of reasoning and to thus avoid (he combinational problems inherent in such reasoning methods.", "Abstract : It seems clear to anyone who pays attention to the use of prepositions in language that any one preposition, when used to describe the spatial relationship between different objects can produce strikingly different mental models for different objects. The mental model produced by the description 'a bowl on a table' seems to be somewhat different from that produced by 'a poster on a wall' which in turn is somewhat different from 'a shelf on a wall' which again is different from 'a fly on a ceiling'. It is the contention of this paper that the preposition in conjunction with a small set of features of the objects (mostly perceptual features) can account for such variations in spatial relations. The thesis discusses a means of taking English-language descriptions involving prepositions and their semantic subjects and objects and deriving a three-dimensional model of the spatial relationships of the subject and object. The program takes extended descriptions involving many objects each of which is incorporated into the overall model. Once an object has been described, it is possible to interrogate the model about the relation of the object to any other in the model, without recourse to inference rules of the following kind: 'if A is on B and B is in C then A is (probably) in C'.", "The human visual process can be studied by examining the computational problems associated with deriving useful information from retinal images. 
In this paper, we apply this approach to the problem of representing three-dimensional shapes for the purpose of recognition. 1. Three criteria, accessibility, scope and uniqueness, and stability and sensitivity, are presented for judging the usefulness of a representation for shape recognition. 2. Three aspects of a representation’s design are considered, (i) the representation’s coordinate system, (ii) its primitives, which are the primary units of shape information used in the representation, and (iii) the organization the representation imposes on the information in its descriptions. 3. In terms of these design issues and the criteria presented, a shape representation for recognition should: (i) use an object-centred coordinate system, (ii) include volumetric primitives of varied sizes, and (iii) have a modular organization. A representation based on a shape’s natural axes (for example the axes identified by a stick figure) follows directly from these choices. 4. The basic process for deriving a shape description in this representation must involve: (i) a means for identifying the natural axes of a shape in its image and (ii) a mechanism for transforming viewer-centred axis specifications to specifications in an object-centred coordinate system. 5. Shape recognition involves: (i) a collection of stored shape descriptions, and (ii) various indexes into the collection that allow a newly derived description to be associated with an appropriate stored description. The most important of these indexes allows shape recognition to proceed conservatively from the general to the specific based on the specificity of the information available from the image. 6. New constraints supplied by a conservative recognition process can be used to extract more information from the image. A relaxation process for carrying out this constraint analysis is described." ], "authors": [ { "name": [ "G. Adorni", "A. Boccalatte", "M. Manzo" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "D. Waltz", "L. Boggess" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "L. Boggess" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "D. Marr", "H. Nishihara" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null, null ], "s2_corpus_id": [ "197254", "35497951", "60499656", "43759520" ], "intents": [ [], [], [], [] ], "isInfluential": [ false, false, false, false ] }
Problem: The paper aims to address the challenge of understanding and conceptualizing scene descriptions in natural language, specifically in Italian, and generating static images of the scenes. Solution: The paper proposes a system that includes a syntactic analyzer based on Procedural Systemic Grammar, a semantic analyzer based on Conceptual Dependency Theory, and a dictionary to achieve the goal of understanding and generating scenes from natural language descriptions.
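The hypothesis above describes a pipeline shape: a syntactic analyzer, a semantic analyzer, and a dictionary that together turn a natural language scene description into a static scene model. The following is a minimal, illustrative sketch of that shape only; the toy grammar, the SIZES dictionary, and the placement rule are invented stand-ins, not the system's Procedural Systemic Grammar or Conceptual Dependency machinery.

```python
# Illustrative sketch of the description -> interpretation -> scene pipeline.
# All names and rules here are assumptions made for the example.

from typing import Dict, Tuple

# A toy "dictionary": object heights used when placing them in the scene.
SIZES: Dict[str, float] = {"table": 1.0, "bowl": 0.2, "lamp": 0.4}


def parse(sentence: str) -> Tuple[str, str, str]:
    """Tiny stand-in for the syntactic analyzer: handles 'a X on the Y.'"""
    words = sentence.lower().rstrip(".").split()
    # e.g. ["a", "bowl", "on", "the", "table"] -> (figure, relation, ground)
    return words[1], words[2], words[4]


def interpret(figure: str, relation: str, ground: str) -> Dict[str, float]:
    """Stand-in semantic analysis: turn the spatial relation into coordinates."""
    scene = {ground: 0.0}
    if relation == "on":
        # The figure rests on top of the ground object.
        scene[figure] = SIZES.get(ground, 1.0)
    return scene


if __name__ == "__main__":
    fig, rel, grd = parse("A bowl on the table.")
    print(interpret(fig, rel, grd))  # {'table': 0.0, 'bowl': 1.0}
```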
497
0.040241
null
null
null
null
null
null
null
null
8444b39351dbd87fa9eae4c9122a5c5e6fd492e0
15073979
null
A Multilevel Approach to Handle Non-Standard Input
In the project "Procedural Dialogue Models" being carried on at the University of Bielefeld we have developed an Incremental multilevel parsing formalism to reconstruct task-oriented dialogues. A major difficulty we have had to overcome is that the dialogues are real ones with numerous
{ "name": [ "Gehrke, Manfred" ], "affiliation": [ null ] }
null
null
First Conference of the {E}uropean Chapter of the Association for Computational Linguistics
1983-09-01
8
2
null
In recent NLU-systems major importance is laid on processing non-standard input.1) The present paper reports on the experiences we have made in the project "Procedural Dialogue Models" reconstructing task-oriented dialogues, which were uttered in a rather colloquial German.2) To this aim we have developed an incremental multilevel parsing formalism (Christaller/Metzing 82, Gehrke 82, Gehrke 83), based on an extension of the concept of cascaded ATNs (Woods 80). This formalism (see fig. A) organizes the interaction of several independent processing components, in our case 5. The processing components need not be ATNs; it is up to the user of the formalism to choose the tool for the specific task that suits her/him best. 2) The dialogues that we are working with were recorded in the City of Frankfurt/Main (Klein 79). is an elliptical question (voice rising, when uttered) and on the semantic stage it can be categorized as a GOAL case slot, depending on "zur" and the fact that the NP refers to a building. Since it is at the beginning of a task-oriented dialogue with no task fixed until now, it is categorized as a de~i.af~o~i{,'c~lo.. A complete version of this utterance may be "How can I get to the old opera?" Another possible interpretation may be that X only wants to be confirmed in her/his assumption that he/she is on the right way to his goal. In this case a correct answer would have been simply "yes". But a decision which interpretation holds true cannot be made with the available information. It has been shown how some types of ill-formed input are handled, especially with the help of semantic constraints and pragmatic considerations. At present, our work in this field is focused on handling self-corrections above the word level, as you will find one in line 5 of the sample translation.
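To make the shape of such a cascade concrete, here is a minimal Python sketch of processing components arranged in a fixed order, each consuming the stream produced by the previous one item by item. The stage names and toy transforms are illustrative assumptions only, not the five components actually used in the project.

```python
# Illustrative sketch of a cascade of processing stages in the spirit of
# cascaded ATNs: each stage consumes the output stream of the previous
# stage and may emit zero or more hypotheses per input item.

from typing import Callable, Iterable, Iterator, List


class Stage:
    """One level of the cascade: transforms a stream of items incrementally."""

    def __init__(self, name: str, transform: Callable[[str], List[str]]):
        self.name = name
        self.transform = transform

    def process(self, items: Iterable[str]) -> Iterator[str]:
        for item in items:
            # One input item may yield several hypotheses for the next stage
            # (e.g. one word form -> several morphological readings).
            yield from self.transform(item)


def run_cascade(stages: List[Stage], tokens: Iterable[str]) -> List[str]:
    """Feed tokens through all stages in order; results flow upward lazily."""
    stream: Iterable[str] = tokens
    for stage in stages:
        stream = stage.process(stream)
    return list(stream)


if __name__ == "__main__":
    cascade = [
        Stage("word level", lambda w: [w.lower()]),             # normalize
        Stage("phrase level", lambda w: [w + "/np"]),            # toy category
        Stage("semantic level", lambda c: ["GOAL(" + c + ")"]),  # toy case slot
    ]
    print(run_cascade(cascade, ["Zur", "alten", "Oper"]))
    # ['GOAL(zur/np)', 'GOAL(alten/np)', 'GOAL(oper/np)']
```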
null
null
null
null
Main paper: I. The incremental, multilevel parsing formalism: In recent NLU-systems major importance is laid on processing non-standard input.1) The present paper reports on the experiences we have made in the project "Procedural Dialogue Models" reconstructing task-oriented dialogues, which were uttered in a rather colloquial German.2) To this aim we have developed an incremental multilevel parsing formalism (Christaller/Metzing 82, Gehrke 82, Gehrke 83), based on an extension of the concept of cascaded ATNs (Woods 80). This formalism (see fig. A) organizes the interaction of several independent processing components, in our case 5. The processing components need not be ATNs; it is up to the user of the formalism to choose the tool for the specific task that suits her/him best. 2) The dialogues that we are working with were recorded in the City of Frankfurt/Main (Klein 79). is an elliptical question (voice rising, when uttered) and on the semantic stage it can be categorized as a GOAL case slot, depending on "zur" and the fact that the NP refers to a building. Since it is at the beginning of a task-oriented dialogue with no task fixed until now, it is categorized as a de~i.af~o~i{,'c~lo.. A complete version of this utterance may be "How can I get to the old opera?" Another possible interpretation may be that X only wants to be confirmed in her/his assumption that he/she is on the right way to his goal. In this case a correct answer would have been simply "yes". But a decision which interpretation holds true cannot be made with the available information. It has been shown how some types of ill-formed input are handled, especially with the help of semantic constraints and pragmatic considerations. At present, our work in this field is focused on handling self-corrections above the word level, as you will find one in line 5 of the sample translation. Appendix:
null
null
null
null
{ "paperhash": [ "wahlster|over-answering_yes-no_questions:_extended_responses_in_a_nl_interface_to_a_vision_system", "gehrke|syntax,_semantics,_and_pragmatics_in_concert:_an_incremental,_multilevel_approach_in_reconstructing_task-oriented_dialogues", "weischedel|an_improved_heuristic_for_ellipsis_processing", "sondheimer|a_rule-based_approach_to_ill-formed_input", "woods|cascaded_atn_grammars", "kwasny|treatment_of_ungrammatical_and_extra-grammatical_phenomena_in_natural_language_understanding_systems" ], "title": [ "Over-Answering Yes-No Questions: Extended Responses in a NL Interface to a Vision System", "Syntax, Semantics, and Pragmatics in Concert: An Incremental, Multilevel Approach in Reconstructing Task-Oriented Dialogues", "An Improved Heuristic for Ellipsis Processing", "A Rule-Based Approach to Ill-Formed Input", "Cascaded ATN Grammars", "Treatment of ungrammatical and extra-grammatical phenomena in natural language understanding systems" ], "abstract": [ "This paper addresses the problem of overanswering yes-no questions, i.e. of generating extended responses that provide additional information to yes-no questions that pragmatically must be interpreted as wh-questions. Although the general notion of extended responses has already been explored, our paper reports on the first attempt to build a NL system able to elaborate on a response as a result of anticipating obvious follow-up questions, in particular by providing additional case role fillers, by using more specific quantifiers and by generating partial answers to both parts of questions containing coordinating conjunctions. As a further innovation, the system explicitly deals with the informativeness-simplicity tradeoff when generating extended responses. We describe both an efficient implementation of the proposed methods, which use message passing as realized by the FLAVOR mechanism and the extensive linguistic knowledge in corporated in the verbalization component. The structure of the implemented NL generation component is illustrated using a detailed example of the systems\"s performance as an interface to an image understanding system.", "This paper gives an overview of a model for the reconstruction of task-oriented dialogues based on an interactive, multilevel parsing formalism. It is applied to route description dialogues. It will be shown, how the pragmatic aspects of such dialogues are taken into account on different levels of processing. The approach described is based on an extension of the concept of cascaded ATNs. Furthermore this approach uses knowledge sources (KSs) for every participant in the dialogue in which knowledge about the world and a partner model is build up during the analysis of a dialogue. These KSs are supplied to the parsing process, as well. In this paper special importance is laid on the description of the interaction and cooperation of the different processing components of this formalism.", "Several natural language systems (e.g., Bobrow et al., 1977; Hendrix et al., 1978; Kwasny and Sondheimer, 1979) include heuristics for replacement and repetition ellipsis, but not expansion ellipsis. One general strategy has been to substitute fragments into the analysis of the previous input, e.g., substituting parse trees of the elliptical input into the parse trees of the previous input in LIFER (Hendrix, et al., 1978). 
This only applies to inputs of the same type, e.g., repeated questions.", "Though natural language understanding systems have improved markedly in recent years, they have only begun to consider a major problem of truly natural input: ill-formedness. Quite often natural language input is ill-formed in the sense of being misspelled, ungrammatical, or not entirely meaningful. A requirement for any successful natural language interface must be that the system either intelligently guesses at a user's intent, requests direct clarification, or at the very least, accurately identifies the ill-formedness. This paper presents a proposal for the proper treatment of ill-formed input. Our conjecture is that ill-formedness should be treated as rule-based. Violation of the rules of normal processing should be used to signal ill-formedness. Meta-rules modifying the rules of normal processing should be used for error identification and recovery. These meta-rules correspond to types of errors. Evidence for this conjecture is presented as well as some open ~]estions.", "A generalization of the notion of ATN grammar, called a cascaded ATN (CATN), is presented. CATN's permit a decomposition of complex language understanding behavior into a sequence of cooperating ATN's with separate domains of responsibility, where each stage (called an ATN transducer) takes its input from the output of the previous stage. The paper includes an extensive discussion of the principle of factoring -- conceptual factoring reduces the number of places that a given fact needs to be represented in a grammar, and hypothesis factoring reduces the number of distinct hypotheses that have to be considered during parsing.", "3. When a map, drawing or chart, etc., is part of the material being photo­ graphed the photographer has followed a definite method in “sectioning” the material. It is customary to begin Aiming at the upper left hand comer of a large sheet and to continue from left to right in equal sections with small overlaps. If necessary, sectioning is continued again-beginning below the Arst row and continuing on until complete." ], "authors": [ { "name": [ "W. Wahlster", "H. Marburger", "A. Jameson", "Stephan Busemann" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Manfred Gehrke" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "R. Weischedel", "N. Sondheimer" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "N. Sondheimer", "R. Weischedel" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "W. Woods" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "S. Kwasny" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null, null, null, null ], "s2_corpus_id": [ "1787204", "3194636", "1727772", "1775172", "6169596", "59707086" ], "intents": [ [ "methodology" ], [ "methodology" ], [], [], [ "methodology" ], [ "methodology" ] ], "isInfluential": [ false, false, false, false, true, false ] }
- Problem: The paper aims to address the challenge of reconstructing task-oriented dialogues that contain numerous acknowledgments. - Solution: The paper proposes the use of an Incremental multilevel parsing formalism to effectively reconstruct these dialogues.
497
0.004024
null
null
null
null
null
null
null
null
85b3d62004ceef6d3282d18a9719e2f6aa34aec2
16954606
null
Towards the Semantics of Sentence Adverbials
In the present paper we argue that the so-called sentence adverbials (typically, adverbs like probably, admittedly,...) should be generated, in
{ "name": [ "Koktova, Eva" ], "affiliation": [ null ] }
null
null
First Conference of the {E}uropean Chapter of the Association for Computational Linguistics
1983-09-01
10
6
null
on grounds of their special behaviour in the topic-focus articulation (TFA) of a sentence. From the viewpoint of the translation of CA expressions (and also of the multiple occurrence thereof inside a sentence) into a calculus of intensional logic, it should be noted that the TFA properties of CA expressions are directly correlated to the scope properties thereof. Our approach, which is stated in terms of a linguistic theory, serves as a basis for an algorithm of analysis of CA for purposes of a system of man-machine communication without a pre-arranged data base. CA (including negation and other minority group adverbs) is defined in FGD by its position in the underlying basic ordering of complementations; presumably, it occupies the leftmost, i.e. the communicatively least dynamic position. The TFA properties of CA (also on its multiple occurrence inside a sentence) should be taken into account also in the translation of CA expressions into a calculus of intensional logic because they are directly correlated to the scope properties thereof. The TFA distinctions which are reflected on the surface serve as clues for an algorithm of analysis of CA expressions in written technical texts for purposes of a question answering system without a pre-arranged data base. In the present paper we argue that the so-called sentence adverbials (typically, adverbs like probably, admittedly, ...) as well as certain minority group adverbs (such as especially, also, not, even, ...) should be generated, in the framework of Functional Generative Description (henceforth, FGD), by means of a new complementation (functor, deep case), namely Complementation of Attitude (henceforth, CA). We argue that in the underlying structure of a sentence, CA can occupy several positions in the topic-focus articulation (henceforth, TFA) of a sentence, which coincide with the positions of the occurrence of negation. As negation only slightly differs in its distribution on the surface, there is raised a proposal according to which negation (and other minority group adverbs with similar properties) should be generated as a case of CA. II THEORETICAL BACKGROUND FGD is a multilevel system; it consists of a sequence of five levels which are connected by the asymmetrical relation of form and function, which accounts for the phenomena of homonymy and synonymy in natural language. The description of a sentence is equivalent to a sequence of its representations on all levels. The difference between the level of (strict, literal, linguistic) meaning (i.e.
the underlying, or tectogrammatical level - a level of disambiguated linguistic expressions) and the level of surface syntax, being parallel to the difference which is made in transformational grammar between the levels of deep and surface structure, constitutes the strong generative power of the FGD system; see (Sgall et al., 1969), (Hajičová and Sgall, 1980), and (Sgall et al., forthcoming). The grammar of FGD consists of the generative component in the form of a dependency grammar, which generates underlying (tectogrammatical) representations (henceforth, TRs) of sentences in the form of linear formulas (which can be rendered also in the shape of rooted and projective dependency trees), and of the transductive component, by means of which TRs are translated, step by step, onto the lower levels of FGD. Most important for the considerations in linguistic theory is the level of meaning - a link between the lower levels of the linguistic system and the (extralinguistic) domain of cognitive (ontological) content. It should be emphasized in this place that the distinctions of the level of meaning are correlated to those of the domain of cognitive content only in the translation of (disambiguated, meaningful) linguistic expressions into a calculus of intensional logic, see (Materna and Sgall, 1980), (Kosík and Sgall, 1981) and (Materna and Sgall, 1983). Thus, there should be distinguished, on the one hand, the linguistic semantics, which deals only with the distinctions which are structured by the linguistic form, see (Sgall et al., 1977) and also de Saussure's and Hjelmslev's conception of meaning as "form of content", and on the other hand, the logical (cognitive) semantics, which is committed to (conceptions of) the ontological structure of reality and which is used in the interpretation of linguistic expressions with respect to the extralinguistic content in their translation into a logical calculus, e.g. for purposes of natural language understanding. There are two relations defined on the dependency tree of the TR of a sentence: the relation of dependency and the relation of the deep word-order, which means that a TR captures the twofold structuring of (the meaning of) a sentence: its (syntactically based) dependency structure and its (semantico-pragmatically based) communicative structure, i.e. its TFA. In the dependency structure of a sentence the root of the tree represents the main verb, and the nodes of the main subtree represent its obligatory, optional and free complementations. The dependency principle is recursive. Each node has labels of three types: lexemic, morphological (such as plural, future, ...) and syntactic (such as Actor, Locative, ...); the syntactic labels may be alternatively viewed as labels on the edges of the tree. Every verb, noun, adjective and adverb has its case frame, i.e. a specification of its obligatory and optional complementations, see (Panevová, 1977). Topic-Focus Articulation Background In the communicative structure of a sentence there is captured the deep word-order of the (occurrences of) complementations, corresponding to a hierarchy of degrees of communicative dynamism thereof, as well as the boundary (boundness juncture) between the topic and the focus of a sentence, i.e. between the contextually bound and non-bound elements of the main subtree of a sentence.
In fact, the above mentioned communicative distinctions cut across the dependency structure of a sentence; thus, every embedded clause as well as every (complex) phrase has its secondary TFA, including a secondary boundness juncture. The notion of contextual boundness is broadly conceived: not only a previous mentioning in a text but also a situational activation may cause the contextual boundness of an element. The degrees of communicative dynamism of the complementations occurring in the focus of a sentence (i.e. also in a topicless sentence) obey the scale of the underlying basic ordering of complementations, or systemic ordering (i.e. ordering of all types of complementations on their occurrence in a topicless sentence). On the surface we observe different means of how the TFA of a sentence is expressed: cf. the free surface word-order in inflectional languages vs. the various syntactic means in languages with a fixed (grammatical) surface word-order (such as cleft sentences or the existential construction there is in English), or the particles ga and wa in Japanese. A surface representation of a sentence is often ambiguous between several possible underlying sources concerning the different placings of the boundness juncture; these possibilities may be disclosed by means of the negation test or the question test, see (Sgall and Hajičová, 1977-78). In FGD, the universe of discourse is conceived as the activated part of the stock of knowledge shared by the speaker and the hearer during the discourse. The stock of shared knowledge is supposed to be dynamic, i.e. changing (being modified) in time during a discourse. The most activated elements of the stock of shared knowledge appear as the communicatively least dynamic occurrences of complementations inside a sentence. The speaker, essentially, is free in the choice of the topics of sentences. By way of illustration of TRs of sentences in FGD, let us observe the surface sentence 1 and one of its TRs (namely the one where the Actor is contextually bound) captured by a (simplified) linear notation and indicated as TR 1, where act stands for Actor, att for Attitude, loc for Location, b is a superscript indicating contextual boundness, the slash denotes the boundness juncture of a sentence, and the brackets correspond in a certain way to the edges of the dependency tree. The starting point of our argument is the claim that CA obeys essentially the same pattern of occurrence in the underlying TFA structure of a sentence as the one which was proposed by (Hajičová, 1973) for negation. In her conception, negation is an abstract, operator-like functor of FGD without a label on its edge and without pertinence to the TFA of a sentence; the symbol NEG, generated as a label on the node of the functor of negation, must be changed by surface rules into such forms as not, do not, etc. In spite of the alleged non-pertinence of negation to the TFA of a sentence, there are delineated by Hajičová exactly three TFA positions (with respect to the position of the verb) in which negation can be generated; out of them, two belong to the primary case (negation occurring in the focus of a sentence) and one belongs to the secondary case (negation occurring in the topic of a sentence). In the scheme which follows we shall see that these three underlying positions are a perfect match to the possibilities of occurrence, in the TFA of a sentence, of CA. In the examples, the scopes of the expressions in question are indicated by arrows.
It should be noted that in the primary case (i.e. in (i) and (ii)), the scopes of the expressions in question extend over the focus of a sentence. (i) The verb of a sentence is non-bound (i.e. it occurs in the focus of a sentence). There is negated ("attituded") the relation between the topic and the focus of a sentence. In fact, there is even a fourth possible position of negation and CA in the TFA of a sentence, which can be subcategorized as a subcase of (i): namely, a position where negation and CA are not only less communicatively dynamic than the (non-bound) verb, but where they play the role of the least communicatively dynamic element of a sentence (cf. TRs 2" and 3", also underlying the ambiguous 2 and 3, respectively), this leftmost position coinciding with the position of negation and CA in the underlying basic ordering of complementations. (ii) The verb of a sentence is bound (i.e. it occurs in the topic of a sentence). There is negated ("attituded") the relation between the topic and the (nonverbal) focus of a sentence. In this case, negation (or the CA expression) can stand, on the surface, either in the preverbal position, which gives rise to ambiguity with case (i) above (cf. the ambiguous surface sentences 2 and 3), or in the postverbal position, which is unambiguous (cf. the surface sentences 4 and 5). 5 Terry will run probably to Brooklyn. TR 5 ((Terry^b)_act run^b-fut / (probably)_att (Brooklyn)_loc) (iii) The secondary case. The verb is bound and it alone is negated ("attituded"). In this case, negation (or the CA expression) stands, on the surface, in the preverbal position, which gives rise to ambiguity with cases (i) and (ii) above. 6 (= 2) Terry will not run to Brooklyn. TR 6 ((Terry^b)_act NEG run^b-fut / (Brooklyn)_loc) 7 (= 3) Terry will probably run to Brooklyn. TR 7 ((Terry^b)_act (probably^b)_att run^b-fut / (Brooklyn)_loc) On the basis of the observed coincidence in the behaviour of negation and CA in the underlying TFA structure of a sentence, we propose that negation and CA should be collapsed, i.e. that negation should be generated as a case of CA (by means of CA). On this proposal, there would be removed from FGD the only abstract label (NEG) and substituted by the adverb not, which should be viewed as a regular tectogrammatical lexical unit occurring in TRs of sentences. Thus, TRs 2, 4 and 6 should be readjusted to a shape where instead of NEG, not is generated as bound or non-bound and as accompanied by the label of CA (att). The features in which negation differs from the rest of CA expressions, such as (i) its non-occurrence in the sentence-initial position on the surface (+Not, Terry is singing), (ii) its non-occurrence in the function of a loose complementation in the sentence-final position (+Terry is singing, not) and (iii) its regular occurrence in questions and commands, should be treated as exceptions which do not have the force to overthrow the generalization stated in III C., concerning the behaviour of CA (including negation) in the underlying structure of a declarative sentence.
Moreover, as we shall see in III D., not is not an isolated item among the other CA expressions because there are also other minority group adverbs obeying the same paradigm of occurrence in the TFA of a sentence which exhibit the essential idiosyncratic properties of not. On grounds of the evidence supplied in III A., there can be made a generalization according to which CA (including negation) occupies, in the underlying basic ordering of complementations, the position of the leftmost, i.e. the least communicatively dynamic element, which means that it occurs inside a sentence (in the primary case, i.e. in (i) and (ii) of III A.) as the least communicatively dynamic element of the focus, thus playing on the surface (with the exception of the preverbal positions) the role of the topic-focus boundary indicator (cf. examples 4 and 5). Thus, CA is defined, as a complementation of FGD, by its position in the underlying basic ordering of complementations. In fact, every adverbial expression which obeys the paradigm of occurrence in the TFA of a sentence as specified in III A. (the position in the underlying basic ordering being only one instance thereof - cf. Footnote 2) should be classified as a case of CA, however idiosyncratic it may seem as concerns its lexical semantics, its distributional properties, or its possibilities of paraphrasing. The adverbial expressions belonging to the single minority adverb groups (and even adverbial expressions belonging to one group) differ in their lexical semantics, distributional properties, and possibilities of paraphrasing. The groups of CA expressions can be tentatively subcategorized as follows: (i) "style disjuncts" (briefly, honestly, simply, ...); (ii) adverbials of viewpoint (in my view, according to the newspapers, ...); (iii) "attitudinal disjuncts" (admittedly, surprisingly, unfortunately, ...); (iv) adverbials of subjective certainty (probably, possibly, certainly, ...); (v) "particularizers" (mainly, especially, ...); (vi) "additives" (also, again, ...); (vii) negation (not); and (viii) exclusives (only, even, ...). We suppose that groups (i), (ii) and (iii) are open-ended (i.e. productive), whereas the members of groups (iv), (v), (vi), (vii), and (viii) can be listed; these groups can then be labelled as minority adverb groups. Out of them, groups (v)-(viii) exhibit the idiosyncratic properties mentioned above in III B. and III D. Including other minority adverb groups into Complementation of Attitude We argue that there should be included into CA also other minority adverb groups consisting of adverbial expressions (adverbs) which obey the paradigm of occurrence in the TFA of a sentence as specified in III A. and which share the essential idiosyncratic properties of not, such as especially, mainly, also, again, even, and only. All of them exhibit the properties (ii) and (iii) (as specified in III B.), and only exhibits also (i). We propose, then, that CA should be viewed as a means of generating adverbial expressions which exhibit a special kind of behaviour in the TFA of a sentence (specified in III A.) and which can be divided into several groups; the expressions belonging to the single groups are supposed to be differentiated primarily by their mutual ordering, which dictates their scope properties and whose violation yields ungrammaticality (cf. IV).
In the underlying representations of sentences in FGD, CA can be generated essentially on two principles of multiple occurrence of a complementation inside a sentence. (i) Firstly, there can be generated in the focus (and in the secondary case, also in the topic) of a sentence clusters of two or more occurrences of CA, which differ in the degrees of their communicative dynamism; there hold specific scope relations between them; the CA expression with the highest degree of communicative dynamism in the cluster has in its scope the rest of the focus of a sentence (in the primary case), or the rest of the topic (in the secondary case); the other CA expressions in the cluster have in their scopes the rest of the cluster. If the adverbial expressions inside the cluster belong to different groups of CA, they obey a certain kind of ordering (as suggested by the listing in III D.), whose violation yields ungrammaticality (cf. 8 vs. 9). If, however, the adverbial expressions occurring inside the cluster belong to the same group, they cooccur without any restrictions on their order. 8 Terry will run / probably not only to Brooklyn. 9 +Terry will run / only not probably to Brooklyn. If two occurrences of CA are detached by the boundness juncture of a sentence, they may cooccur without any restrictions on their order because their scopes do not overlap; cf. 10, containing two negations. 10 Terry did not sing / not because of Mary. (ii) Secondly, we suppose that on the coordinative-appositive principle of multiple occurrence of a complementation inside a sentence, the occurrences of a complementation do not differ in their degrees of communicative dynamism, and hence, that their order does not correspond directly to the principles of the TFA of a sentence: a coordinative or appositive unit presumably occupies, in the underlying representation of a sentence, the position of one "word" in the deep word-order. In TRs of sentences in FGD, coordination and apposition are not represented by means of the dependency tree, but require a special device. Thus, coordinative and appositive occurrences of CA have identical scopes: in 11, probably and certainly have in their scopes Terry will run to Brooklyn, and in 12, Terry loves Mary. In the linear representation, it is not possible to indicate the scopes by arrows. 3) On the multiple occurrence of CA within the loose occurrence thereof or within the coordinative-appositive multiple occurrence thereof, CA expressions do not obey the ordering suggested in III D.; cf. a. Tragically but not surprisingly, Terry loves Mary. 11 Probably or certainly, Terry will run to Brooklyn. 12 Probably, i.e. far from certainly, Terry loves Mary. In the analysis of simple CA occurrences in sentences in written technical texts within the framework of the question answering system TIBAQ (cf. (Sgall, 1983)), cases to be resolved by an algorithm concern, in fact, only those adverbs which may function both as CA and as Complementation of Manner (such as amusingly, curiously, delightfully, foolishly, naturally, really, reasonably, strangely, surprisingly, unexpectedly, ... of group (iii), or honestly, ... of group (i)). The adverbs which can function only as CA (such as probably, admittedly, unfortunately, ...; there are at least one hundred of them) should be listed in the lexicon. Presumably, there occurs only one kind of genuine ambiguity with the adverbs which may function in the mentioned two ways (cf.
line 8 of the algorithm below);4) other cases of surface ambiguity can be resolved by an algorithm, due to the underlying TFA distinctions which are reflected on the surface (cf. line 9 of the algorithm below) as well as due to some 4) In cases of genuine ambiguity (such as the one in 8 of the algorithm), the adverbial expression in question (naturally) cannot be resolved automatically because of the lack of surface clues for the disambiguation of the boundness juncture of the sentence: in this case, the adverbial expression in question functions as CA if it is located in the focus of a sentence, and as non-CA if it is located in the topic of a sentence.
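As a small illustration of the lexicon-plus-surface-clues idea sketched in the preceding paragraphs, the following Python fragment separates adverbs that can only be CA from those that can also be Complementation of Manner, and falls back on the position of the occurrence relative to the topic-focus boundary. The word lists and the boundary test are simplifying assumptions of this sketch, not the TIBAQ algorithm itself.

```python
# Minimal sketch: adverbs that can only function as Complementation of
# Attitude (CA) are listed in a lexicon; adverbs that can also function as
# Complementation of Manner are disambiguated by whether the occurrence
# lies in the focus or in the topic of the sentence. Word lists and the
# boundary test are illustrative assumptions.

from typing import Optional

CA_ONLY = {"probably", "admittedly", "unfortunately", "certainly"}
CA_OR_MANNER = {"naturally", "surprisingly", "honestly", "strangely"}


def classify_adverb(adverb: str, in_focus: Optional[bool]) -> str:
    """Classify one adverb occurrence; in_focus=None means no surface clue."""
    word = adverb.lower()
    if word in CA_ONLY:
        return "CA"
    if word in CA_OR_MANNER:
        if in_focus is None:
            return "genuinely ambiguous (boundness juncture unclear)"
        return "CA" if in_focus else "non-CA (e.g. Manner)"
    return "not a CA candidate"


if __name__ == "__main__":
    print(classify_adverb("probably", in_focus=True))    # CA
    print(classify_adverb("naturally", in_focus=False))  # non-CA (e.g. Manner)
    print(classify_adverb("naturally", in_focus=None))   # genuinely ambiguous
```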
null
null
null
null
Main paper: : on grounds of their special behaviour in the topic-focus articulation (TFA) of a sentence. From the viewpoint of the translation of CA expressions (and also of the multiple occurrence thereof inside a sentence) into a calculus of intensional logic, it should be noted that the TFA properties of CA expressions are directly correlated to the scope properties thereof. Our approach, which is stated in terms of a lir~istic theory, serves as a basis for an algorithm of analysis of CA for purposes of a system of man-machine communication without a pro-arranged data base.positions of the occurrence of negation. As negation only slightly differs in its distribution on the surface, there is raised a proposal according to which negation (and other minority group adverbs with similar properties) should be generated as a case of CA.CA (including negation and other minority group adverbs) is defined in FCD by its position in the underlying basic ordering of complementations; presumably, it occupies the leftmost, i.e. the communicatively least dynamic position.The TFA properties of CA (also on its multiple occurrence inside a sentence) should be taken into account also in the translation of CA expressions into a calculus of intensional logic because they are directly correlated to the scope properties thereof.The TFA distinctions which are reflected on the surface serve es clues for an algorithm of analysis of CA expressions in written technical texts for purposes of a question answering system without a pre-arranged data base.In the present paper we argue that the so-called sentence adverbials (typically, adverbs like probabl~, admittedl2,... ) as well as certain minority group adverbs (such as especially, also, not, even,...) should be generated-~-in-~ framework of Functional Generative Description (henceforth, FGD), by means of a new complementation (functor, deep case), namely Complementation of Attitude (henceforth, CA).We argue that in the underlying structure of a sentence, CA can occupy several positions in the topic-focus articulation (henceforth, TFA) of a sentence, which coincide with the II THEORETICAL BACKGROUNDFCD is a multilevel system; it consists of a sequence of five levels which are connected by the asymmetrical relation of form and function, which accomts for the phencmen~ of homonymy and synonymy in natural language. The description of a sentence is equivalent to a sequence of its representations on all levels. The difference between the level of (strict, literal, linguistic) meaning (i.e. 
the underlying, or tectogr~mmatical level -a level of disambiguated linguistic expressions) and the level of surface syntax, being parallel to the difference which is made in transformational grammar between the levels of deep and surface structure, constitutes the strong ~ enerative power of the FGD system; see Sgall et al., 1969) , (Haji~ovA and Sgall, 1980), and (Sgall et al., forthcoming) .The grammar of FGD consists of the generative component in the form of a dependency grammar, which generates underlying (tectogrammatical) representations (henceforth, TRs) of sentences in the form of linear formulas (which can be rendered also in the shane of rooted and projective dependency trees), and of the transductive component, by means of which TRs are translated, step by step, onto the lower levels of FGD.~ost important for the considerations in linguistic theory is the level of meaning -a link between the lower levels of the linguistic system and the (extralinguistic) domain of cognitive (ontological) content. It should be emphasized in this place that the distinctions of the level of meaning are correlated to those of the domain of cognitive content only in the translation of (disambiguated, meaningful) linguistic expressions into a calculus of intensional logic, see ([,~aterna and Sgall, 1980) , (Kosfk and Sgall, 1981) and (~aterna and Sgall, 1983) . Thus, there should be distinguished, on the one hand, the linguistic semantics, which deals only with the distinctions which are structured by the linguistic form, see (Sgall et al., 1977) and also de Saussure's and Hjelmslev's conception of meaning as "form of content", and on the other hand, the logical (cognitive) semantics, which is committed to (conceptions of) the ontological structure of reality and which is used in the interpretation of linguistic expressions with respect to the extralinguistic content in their translation into a logical calculus, e.g. for purposes of natural language understanding.There are two relations defined on the dependency tree of the TR of a sentence: the relation of dependency and the relation of the deep word-order, which means that a TR captures the twofold structuring of (the meaning of) a sentence: its (syntactically based) dependency ~tructure and its (semantico--pragmatically based) communicative structure, i.e. its TFA.In the dependency structure of a sentence the root of the tree reoresents the main verb, and the nodes of the main subtree represent its obligatory, optional and free complementations. The dependency principle is recursive. Each node has labels of three types: lexemic, morphological (such as -plural, -future,...) and syntactic (such as Actor, Locative,...); the syntactic labels may be alternatively viewed as labels on the edges of the tree. Every verb, noun, adjective and adverb has its case frame, i.e. a specificstion of its obligatory and ootional complementstions, see (Panevov~, 1977) .Tooic-Focus Articulation BackgroundIn the communicative structure of a sentence there is captured the deep word-order of the (occurrences of) complementations, corresponding to a hierarchy of degrees of communicative dynamism thereof, as well as the boundary (boundness juncture) between the topic and the focus of a sentence, i.e. between the contextually bound and non-bound elements of the main subtree of a sentence. 
In fact, the above mentioned communicative distinctions cut across the dependency structure of a sentence; thus, every embedded clause as well as every (complex) phrase has its secondsry TFA, including a secondary boundness juncture. The notion of contextual boundness is broadly conceived: not only a previous mentioning in a text but also a situational activation may cause ~he contextual boundness of an element. ~ The degrees of communicative dynamism of the complementations On the surface we observe different means of how the TFA of a sentence is expressed: cf. the free surface word--order in inflectional languages vs. the various syntactic means in languages with a fixed (grammatical) surface word--order (such as cleft sentences or the existential construction there is in English), or the particles ga-a-~ wa in Japanese. A surface representation~f a sentence is often ambiguous between several possible underlying sources concerning the different placings of the boundness juncture; these possibilities may be disclosed by means of the negation test or the question test, see (Sgsll and Haji~ov~, 1977-78) .occurring in the focus of a sentence (i.e. also in a topicless sentence) obey the scale of the underlying basic ordering of complementations, or systemic ordering (i.e. ordering of all types of complementations on their occurrence in a topicless sentence).In FGD, universe of discourse is conceived as the activated part of the stock of knowledge shared by the speaker and the hearer during the discourse. The stock of shared knowledge is supposed to be dynamic, i.e. changing (being modified) in time during a discourse. The most activated elements of the stock of shared knowledge appear as the communicatively least dynamic occurrences of complementations inside a sentence. The speaker, essentially, is free in the choice of the topics of sentences.By way of illustration of TRs of sentences in FGD, let us observe the surface sentence 1 and one of its TRs (namely the one where the Actor is contextually bound) captured by a (simplified) linear notation and indicated as TR l, where act stands for Actor, art for Attitude, loc for Location, b is a superscript indicating contextual boundness, the slash denotes the boundness juncture of a sentence, and the brackets correspond in a certain way to the edges of the dependency tree. The starting point of our argument is the claim that CA obeys essentially the same pattern of occurrence in the underlying TFA structure of a sentence as the one which was proposed by (Haji~ov~, 1973) for negation.In her conception, negation is an abstract, operator-like functor of FOr without a label on its edge and without pertinence to the TFA of a sentence; the symbol NEG, generated as a label on the node of the functor of negation, must be changed by surface rules into such forms as not, do not, etc.In spite of the alleged non-pertinence of negation to the TFA of a sentence, there are delineated by Haji~ovA exactly three TFA positions (with respect to the position of the verb) in which negation can be generated; out of them, two belong to the primary case (negation occurring in the focus of a sentence) and one belongs to the secondary case (negation occurring in the topic of a sentence).In the scheme which follows we shall see that these three underlying positions are a perfect match to the possibilities of occurrence, in the TFA of a sentence, of CA. ~ In the examples, the scopes of the expressions in question are indicated by arrows. 
It should be noted that in the primary case (i.e. in (i) and (ii)), the scopes of the expressions in question extend over the focus of a sentence.(i) The verb of a sentence is non-bound (i.e. it occurs in the focus of a sentence). There is negated ("attituded") the relation between the topic and the focus of a sentence.In fact, there is even a fourth possible position of negation and CA in the TFA of a sentence, which can be subcategorized as a subcase of (i): namely, a position where negation and CA are not only less communicatively dynamic than the (non-bound) verb, but where they play the role of the least communicatively dynamic element of a sentence (cf. TRs 2" and 3", also underlying the ambiguous 2 and 3, respectively), this leftmost position coinciding with the position of negation and CA in the underlying basic ordering of complementations. ii) The verb of a sentence is bound (i.e. it occurs in the topic of a sentence). There is negated ("attituded") the relation between the topic and the (nonverbal) focus of a sentence. In this case, negation (or the CA expression) can stand, on the surface, either in the preverbal ,osition, which gives rise to ambiguity with case (i) above (cf. the ambiguous ~urface sentences 2 and 3), or in the ~ostverbal position, which is unambiguou:J (cf. the surface sentences 4 and 5). Terry will run probably to Brooklyn.TR 5 ((Tezryb)act runb-fut / (probablY)at t (Brooklyn)lo c) (iii) The secondary case. The verb is bound and it alone is negated ("attituded"). In this case, negation (or the CA expression) stands, on the surface, in the preverbal position, which gives rise to ambiguity with cases (i) and (ii) above. 6 (= 2) Terry will not run to Brookl,yn. TR 6 ((Terryb)ac t NEG runb-fut / L (Brooklyn)lo c ) 7 (= 3) Terry will probably run to Brooklyn.TR 7 ((Terryb)act (proVablyb)~tt runb-fut / (Brooklyn)lo c)On the basis of the observed coincidence in the behaviour of negstion and CA in the underlying TFA structure of a sentence, we propose that negation and CA should be collapsed, i.e. that negation should be generated as a case of CA (by means of CA). On this prooosal, there would be removed from FGD the only abstract label (NEG) and substituted by the adverb not, which should be viewed as a regular tectogrsmmatical lexical unit occurring in TRs of sentences. Thus, TRs 2, 4 and 6 should be readjusted to a shape where instead of NEw'G, not is generated as bound or non-bound and as accompanied by the label of CA (att). The features in which negation differs from the rest of CA expressions, such as (i) its non-occurrence in the s@ntence-initial position on the surface (~Not, Terry is singing), (ii) its non-occurrence in the function of a loose comolementation in the sentence-final ~ osition (+Terry is singing, not) and iii) its regular occurrence in questions and commands, should be treated as exceptions which do not have the force to overthrow the generalization stated in III C., concerning the behaviour of CA (including negation) in the underlying structure of a declarative sentence. 
Moreover, as we shall see in III D., not is not an isolated item among the other CA expressions because there are also other minority group adverbs obeying the same paradigm of occurrence in the TFA of a sentence which exhibit the essential idiosyncratic properties of not.On grounds of the evidence supplied in IIIA., there can be made a ~ eneralization according to which CA including negation) occupies, in the underlying basic ordering of complementations, the position of the leftmost, i.e. the least communicatively dynamic element, which means that it occurs inside a sentence (in the primary case, i.e. in (i) and (ii) of IIIA.) as the least communicatively dynamic element of the focus, thus olaying on the surface (with the exception of the preverbal positions) the role of the topic-focus boundary indicator (cf. examples 4 and 5).Thus, CA is defined, as a complementation of FGD, by its position in the underlying basic ordering of complementations. In fact, every adverbial expression which obeys the paradigm of occurrence in the TFA of a sentence as specified in IIIA. (the position in the underlying basic ordering being only one instance thereof -cf. Footnote 2) should be classified as a case of CA, however idiosyncratic it may seem as concerns its lexical semantics, its distributional properties, or its possibilities of paraphrasing.to the single minority adverb groups (and even adverbial ex~ressions belonging to one group) differ in their lexical semantics, distributional properties, and possibilities of oara~hrasing.The groups of CA expressions can be tentatively subcategorized as follows: (i) "style disjuncts" (briefly, honestly, simply,...); (ii) adverbials of viewpoint (in m~ view~ accordin~ to the newspapers,...); (iii) "attitudinal disjuncts" (admittedly, surprisingly, unfortunately,...); (iv) adverbials of subjective certainty (probabl~, possibly, certainly,...); (v) "particularizers" (~, especially,...); (vi) "additives" (also, a~,..~);(vii) . negation (not,Tj--and--(-v-Hi) exclusives (only, even,...).We suppose that groups (i), (ii) and (iii) are open-ended (i.e. productive), whereas the members of groups (iv), (v), (vi), (vii), and (viii) can be listed; these groups can be then labelled as minority adverb ~roups. Out of them, grouos (v) -(viii) exhibit the idiosyncratic properties mentioned above in III B. and III D.Includin 6 other minority adverb ~ into Complementation of deWe argue that there should be included into CA also other minority adverb groups consisting of adverbial expressions (adverbs) which obey the paradigm of occurrence in the TFA of a sentence as specified in IIIA. and which share the essential idiosyncratic properties of not , such as especially, mai_~, also, a~ain, even, and only. All of them"exhibit th-~ropert1-~(ii) and (iii) (as specified in III B.), and only exhibits also (i).We propose, then, that CA should be viewed as a means of generating adverbial expressions which exhibit a special kind of behaviour in the TFA of a sentence (specified in IIIA.) and which can be divided into several groujs; the expressions belonging to the single groups are supposed to be differentiated primarily by their mutual ordering, which dictates their scope properties and whose violation yields ungran~naticality (cf. IV). 
The adverbial expressions belongingIn the underlying representations of sentences in FGD, CA can be generated essentially on two principles of multiple occurrence of a com~lementation inside a sentence.(i) Firstly, there can be generated in the focus (and in the secondary case, also in the topic) of a sentence clusters of two or more occurrences of CA, which differ in the degrees of their con~unicative dynamism; there hold specific scope relations between them; the CA expression with the highest degree of communicative dynsmism in the cluster has in its scooe the rest of the focus of a sentence (in the ~rimary case), or the rest of the topic (in the secondary case); the other CA expressions in the cluster have in their scopes the rest of the cluster.If the adverbial expressions inside the cluster belong to different groups of CA, they obey a certain kind of ordering (as suggested by the listing in III D.), whose violatio~ yields ungrammaticality (cf. 8 vs. 9). If, however, the adverbial expressions occurring inside the cluster belong to the same group, they cooccur without any restrictions on their order.Terry will run / probably not only to Brookl.yn.9 +Terr 2 will run / only not probably to Brooklyn.If two occurrences of CA are detached by the boundness juncture of a sentence, they may cooccur without any resSrictions on their order because their scopes do not overlap; cf. lO, containing two negations.lO Terry did not sin~ / not because of Mary.(ii) Secondly, we suppose that on the coordinative-appositive principle of multiple occurrence of a complementation inside a sentence, the occurrences of a complementation do not differ in their degrees of communicative dynamism, and hence, that their order does not correspond directly to the principles of the TFA of a sentence: a coordinative or appositive unit presumably occupies, in the underlying representation of a sentence, the position of one "word" in the deep word-order. In TRs of sentences in FGD, coordination and apposition are not represented by means of the dependency tree, but require a special device. Thus, coordinative and appositive occurrences of CA have identical scopes: in ll, probably and certainly have in their scopes Terry will run to Brooklyn, 3 On the multiple occurrence of CA within the loose occurrence thereof or within the coordinative-appositive multiple occurrence thereof, CA expressions do not obey the ordering suggested in III D; cf. a. a. Tragically but not surprisingly, Terry loves Mar~. and in 12, Terry loves Mary. In the linear representation, it is not possible to indicate the scopes by arrows.ll Probably or certainly r Terry will run to Brookl.yn.12 Probably, i.e. far from certainly, Terry loves Mary.In the analysis of simple CA occurrences in sentences in written technical texts within the framework of the question answering system TIBAO (cf. (~gall, 1983) ), cases to be resolved by an algorithm concern, in fact, only those adverbs which may function both as CA and as Complementation of ~nner (such as amusingly, curiously, delightfully, foolishly, naturally, really, reasonably, S~rangely, surprisingly, unexpectedly, ~,... of group (iii), or honestly,~, ~,... of group (i)). The adverbs w-h-~can function only as CA (such as probably, admittedly, unfortunately,...there are at least one hundred of them) should be listed in the lexicon.Presumably, there occurs only one kind of genuine ambiguity with the adverbs which may function in the mentioned two ways (cf. 
line 8 of the algorithm below); 4 other cases of surface ambiguity can be resolved by an algorithm, due to the underlying TFA distinctions which are reflected on the surface (cf. line 9 of the algorithm below) as well as due to some 4 In cases of genuine ambiguity (such as the one in 8 of the algorithm), the adverbial expression in question (naturally) cannot be resolved automatically because of the lack of surface clues for the disambiguation of the boundness juncture of the sentence: in this case, the adverbial expression in question functions as C~ if it is located in the focus of a sentence, and ~ as non-CA if it is located in the topic of a sentence. Appendix:
null
null
null
null
{ "paperhash": [ "kosik|towards_a_semantic_interpretation_of_underlying_structures", "sgall|on_the_role_of_linguistic_semantics", "sgall|a_functional_approach_to_syntax:_in_generative_description_of_language" ], "title": [ "TOWARDS A SEMANTIC INTERPRETATION OF UNDERLYING STRUCTURES", "ON THE ROLE OF LINGUISTIC SEMANTICS", "A functional approach to syntax: in generative description of language" ], "abstract": [ "The paper concentrates on questions of topic and focus. The underlying structures of sentences are translated into formulas of a formal language based on the theory of types in such a way that (i) the scope of negation is identified with the focus and (ii) the order of prenex quantifiers in the resulting formula corresponds to the order of quantified NP's in the underlying representation of a given sentence.", "We want to present some support for the suggestion that a counterpart of Carnap's intensional structure may be specified, for natural language, as the semantic representations of sentences, if belief (and other intensional) contexts are kept apart from metalinguistic assertions. Semantic representations should include the topic/focus articulation and other empirically based issues that can be checked linguistically, while the interplay of meaning postulates and a translation procedure should account for the relationship between semantic representations and propositions. It is also suggested that the relationship between concepts and objects ist not fully symmetrical to that between propositions and truth values.", "Come with us to read a new book that is coming recently. Yeah, this is a new coming book that many people really want to read will you be one of them? Of course, you should be. It will not make you feel so hard to enjoy your life. Even some people think that reading is a hard to do, you must be sure that you can do it. Hard will be felt when you have no ideas about what kind of book to read. Or sometimes, your reading material is not interesting enough." ], "authors": [ { "name": [ "A. Kosik", "P. Sgall" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "P. Sgall", "E. Hajicová", "Oldřich Procházka" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "P. Sgall" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null ], "s2_corpus_id": [ "60561148", "60589693", "46531727" ], "intents": [ [], [ "background" ], [] ], "isInfluential": [ false, false, false ] }
null
497
0.012072
null
null
null
null
null
null
null
null
035a5492784c6d9b864e5596b0b6672ee82e7df9
7531574
null
Rules for Pronominalization
Rigorous interpretation of pronouns is possible when syntax, semantics, and pragmatics of a discourse can be reasonably controlled. Interaction with a database provides such an environment. In the framework of the User Specialty Languages system and Discourse Representation Theory, we formulate strict and preferential rules for pronominalization and outline a procedure to find proper assignments of referents to pronouns.
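A compact way to picture the procedure announced in this abstract (strict rules filter the candidate antecedents, preferential rules only rank whatever survives) is the following sketch. The Candidate attributes and the individual tests are invented placeholders standing in for the paper's morphological, configurational, and semantic criteria; they are not the USL system's actual predicates.

```python
# Minimal sketch of an ordered filtering procedure for pronoun resolution:
# strict filters (agreement, same-clause exclusion, semantic compatibility)
# are applied in order, then a preferential ranking by distance. All field
# names and tests are illustrative assumptions.

from dataclasses import dataclass
from typing import List


@dataclass
class Candidate:
    noun: str
    gender: str          # e.g. "neut", "fem", "masc"
    number: str          # "sg" or "pl"
    same_clause: bool    # True if at the same phrase level in the same clause
    compatible: bool     # True if substituting it for the pronoun is not anomalous
    distance: int        # 0 = same sentence, 1 = previous sentence, ...


def resolve(pronoun_gender: str, pronoun_number: str,
            candidates: List[Candidate]) -> List[Candidate]:
    # 1. Morphological agreement (strict).
    survivors = [c for c in candidates
                 if c.gender == pronoun_gender and c.number == pronoun_number]
    # 2. Configurational constraint: no referent within the same clause
    #    at the same phrase level (strict).
    survivors = [c for c in survivors if not c.same_clause]
    # 3. Semantic compatibility (strict).
    survivors = [c for c in survivors if c.compatible]
    # 4. Preferential ranking: prefer closer candidates; if several remain
    #    equally good, the pronoun use was pragmatically inappropriate and
    #    the user should be asked back rather than guessed at.
    survivors.sort(key=lambda c: c.distance)
    return survivors


if __name__ == "__main__":
    cands = [
        Candidate("Austria", "neut", "sg", same_clause=False, compatible=True, distance=1),
        Candidate("Vienna", "neut", "sg", same_clause=False, compatible=False, distance=0),
    ]
    print([c.noun for c in resolve("neut", "sg", cands)])  # ['Austria']
```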
{ "name": [ "Guenthner, Franz and", "Lehmann, Hubert" ], "affiliation": [ null, null ] }
null
null
First Conference of the {E}uropean Chapter of the Association for Computational Linguistics
1983-09-01
18
15
null
The process of pronominalization is governed by rules involving morphological, syntactic, semantic, and pragmatic criteria.These rules are discussed and illustrated with examples drawn from the context of querying a geographical database. Then a procedure is outlined which uses these rules and applies them in the following order:First morphological criteria are checked, if they fail no further tests are required. Then syntactic (or configurational) criteria are tested. Again, if they fail, no further tests are necessary. Next semantic criteria are applied, and if they do not fail, the pragmatic criteria have to be tested. If more than one candidate remains, the use of the pronoun was pragmatically inappropriate and must be noted as such.Morphological criteria concern the agreement of gender and number. Complications come in, when coordinated noun phrases occur, e.g. The starred examples contain inappropriate uses of pronouns. With and-coordination, reference to the complete NP is possible with a plural pronoun. When the members of the coordination are distinct in gender and/or number, reference to them is possible with the corresponding pronouns. Clearly, the same observations hold for interrogative sentences.Syntactic criteria operate only within the boundaries of a sentence, outside they are useless. The configurational critp.ria stemming from DRT however work independent of sentence boundaries.The rule of "disjoint reference" according to Reinhart (1983) goes back to Chomsky and has been refined by Lasnik (1976) and Reinhart (1983) . It is able to handle a variety of well-known cases, such as (9) When did it join the UN? (10) Which countries that import it, produce petrol? (11) *Does it entertain diplomatic relations with Spain's neighbor?(In the starred example, the use of "it" is inappropriate, if it is to be coreferential with "Spain".)Rather than using c-command to formulate this criterion, which is elegant but too strict in some cases (as noted by Reinhart herself and Bolinger (1979) , we have chosen an admittedly less elegant, but hopefully reliable, approach to disjoint reference, in that we specify the concrete syntactic configurations where disjoint reference holds. We do not rely here on the syntactic framework of USL grammar, but use more or less traditionally known terminology for expressing our rules. We need the terms "clause", "phrase", "matrix", "embedding", and "level".These can be made explicit, when a suitable syntactic framework is chosen. Now we can formulate our disjoint reference rule and some of its less obvious consequences.CI. The referent of a personal pronoun can never be within the same clause at the same phrase level. (Note that this rule does not hold for possessive pronouns,)C1 has a number of consequences which we now list:Cla.The (implicit) subject of an infinitve clause can never be referent of a personal pronoun in that clause (12) Does the EC want to dissolve it?Clb.Nouns common to coordinate clauses cannot be referred to from within these coordinate clauses (13) Which country borders it and Spain?Clc.Noun complements of nouns in the same clause can never be referred to.(14) Does it border Spain's neighbors?The following rules have to do with phrases and clauses modifying a noun. They too can be regarded as consequences of C1.C2. Head noun of a phrase or clause can never be referent of a personal pronoun in that phrase or clause C2a. Head noun of participial phrase (15) a country exporting petrol to it C2b. 
C2b. Head noun of a that-clause
(16) the truth is that it follows from A

C2c. Head noun of a relative clause
(17) the country it exports petrol to

The following two rules deal with kataphoric pronominalization (sometimes called backward pronominalization).

C3a. Kataphora into a more deeply embedded clause is impossible.
(18) Did it export a product that Spain produces?

C3b. Kataphora into a succeeding coordinate clause is impossible.
(19) Who did not belong to it but left the UN?

The accessibility relation on DRSs:

C4. Only those discourse referents in the accessibility relation defined in sec. 2.2 are available as referents for a pronoun.

Widely used is the criterion of semantic compatibility. It is usually implemented via "semantic features". In the USL framework we can derive this information from relation schemata. We state the criterion as follows:

S1. Let s be a sentence containing a pronoun p, and c a full noun phrase in the context of p. If p is substituted by c in s to yield s', and s' is not semantically anomalous, i.e. does not imply a contradiction, then c is semantically compatible with s and is hence a semantically possible candidate for the reference of p.

(20) What is the capital of Austria? - Vienna. What does it export?

If it is assumed that only countries, but not capitals, export goods, then the only semantically possible referent for "it" is Austria.

S2. Non-referentially introduced nouns cannot be antecedents of pronouns.

(21) Which countries does Italy have trade with? How large is it?

Since "trade" is used non-referentially, it cannot be the antecedent of "it". Unfortunately, in many cases where this criterion could apply, there is an ambiguity between referential and non-referential use.

Apart from the type of semantic compatibility covered by rule S1, more complex semantic properties are used to determine the referent of a pronoun. The "task structures" described by Grosz (1977) illustrate this fact. We hence formulate rule S3.

S3. The properties of and relationships between predicates determine pronominalizability.

For an illustration of its effect, consider the following query:

(22) What country is its neighbor?

The irreflexivity of the neighbor relation entails that "its" cannot be bound by "what country" in this case, but has to refer to something mentioned in the previous context. Given a subject domain, one can analyze the properties of the relations and the relationships between them, and so build a basis for deciding pronoun reference on semantic grounds. In the framework of the USL system, information on the properties of relations is available in terms of "functional dependencies" given in the database schema or as integrity constraints.
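To make the interplay of the hard criteria concrete, the following sketch shows one way the morphological, configurational, and semantic tests could be applied as successive filters over the accessible discourse referents. The paper gives no code; all class and function names here (Candidate, Pronoun, filter_candidates, and the individual test functions) are invented for illustration, and the representation of clauses, phrase levels, and semantic sorts is a simplifying assumption.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    referent: str          # discourse referent, e.g. "u1"
    gender: str            # "masc" | "fem" | "neut"
    number: str            # "sg" | "pl"
    clause_id: int         # clause in which the introducing NP occurs
    phrase_level: int      # embedding level within that clause
    sort: str              # semantic sort from the schema, e.g. "country"

@dataclass
class Pronoun:
    gender: str
    number: str
    clause_id: int
    phrase_level: int
    compatible_sorts: set  # sorts the pronoun's predicate slot accepts (S1/S3)

def morphologically_agrees(p: Pronoun, c: Candidate) -> bool:
    # gender and number agreement
    return p.gender == c.gender and p.number == c.number

def configurationally_possible(p: Pronoun, c: Candidate) -> bool:
    # C1 (simplified): the referent may not sit in the same clause
    # at the same phrase level as the pronoun.
    return not (p.clause_id == c.clause_id and p.phrase_level == c.phrase_level)

def semantically_compatible(p: Pronoun, c: Candidate) -> bool:
    # S1/S3 (simplified): substituting the candidate must not violate the
    # schema, e.g. only countries export goods, "neighbor" is irreflexive.
    return c.sort in p.compatible_sorts

def filter_candidates(pronoun: Pronoun, accessible: list) -> list:
    """A stand-in for the paper's function c: apply the hard criteria in order."""
    out = [c for c in accessible if morphologically_agrees(pronoun, c)]
    out = [c for c in out if configurationally_possible(pronoun, c)]
    out = [c for c in out if semantically_compatible(pronoun, c)]
    return out
```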
The generation of discourse is controlled by two factors: communicative intentions and mutual knowledge. In the context of database interaction, we can assume that the communicative intentions of a user are simply to obtain factual answers to factual questions. His intentions are expressed either by single queries or by sequences of queries, depending on how complex these intentions are or how closely they correspond to the information in the database. As will be shown below, in many cases the system will not have a chance to determine whether a given query is a "one-shot query" or whether it is part of a sequence of queries with a common "theme". For the resolution of pronouns, this means that the system should rather ask the user back than make wild guesses about what might be the most "plausible" referent. This is of course not possible when running text is analyzed in a "batch mode" and no user is there to be asked for clarification.

Mutual knowledge (see e.g. Clark and Marshall (1981) for a discussion) determines the rules for introducing and referencing individuals in the discourse. In the context of database interaction we assume the mutual knowledge to consist initially of:

- the set of proper names in the database,
- the predicates whose extensions are in the database,
- the "common sense" relationships between and properties of these predicates.

It will be part of the design of a database to establish what these "common sense" relationships and properties are, e.g. whether it is generally known to the user community that "capital" expresses a one-one relation. Each question-answer pair occurring in the discourse is added to the stock of mutual knowledge. It is a pragmatic principle of pronominalization that only mutual knowledge may be used to determine the referent of a pronoun on semantic grounds; hence it may be legal to use a sentence containing a pronoun where earlier in the discourse it was illegal, because the mutual knowledge has increased in the meantime.

What the topic of a discourse is, and which of the entities mentioned in it are in focus, is reflected in the syntactic structure of sentences. This has been observed for a long time. It has also often been observed that discourse topic and focus have an effect on pronominalization where morphological, configurational, and semantic rules fail to determine a single candidate for reference. However, it has not yet been possible to formulate precise rules explaining this phenomenon. We have the impression that such rules cannot be absolutely strict rules, but are of a preferential nature. We have developed a set of such rules and tested them against a corpus of text containing some 600 pronoun occurrences, and have found them to work remarkably well. Similar tests (with a similar set of rules) have been conducted by Hofmann (1976). In the sequel we formulate and discuss our list of rules. Their ordering corresponds to the order in which they have to be applied.

Noun phrases within the sentence containing the pronoun are preferred over noun phrases in previous or succeeding sentences. Consider the sequence:

(23) What country joined the EC after 1980? - Greece.
(24) What country consumes the wine it produces?

One could argue that "Greece" is just as probably the intended referent of "it" in this case as the bound interpretation, and that hence the use of "it" should be rejected as inappropriate. However, there is no way to avoid the "it" if the bound-variable interpretation is intended, and one can use this as a ground to rule out the interpretation where "it" refers to "Greece".

Noun phrases in sentences before the sentence containing the pronoun are preferred over noun phrases in more distant sentences. This criterion is very important to limit the search for possible discourse referents.

Pronouns are preferred over full noun phrases. This rule is found in many systems dealing with anaphora. One can motivate it by saying that pronominalization establishes an entity as a theme which is then maintained until the chain of pronouns is broken by a sentence not containing a suitable pronoun. For an example consider:

(25) What is the area of Austria?
(26) What is its capital?
(27) What is its population?
P3. Noun phrases in a matrix clause or phrase are preferred over noun phrases in embedded clauses or phrases.

P3a. Noun phrases in a matrix clause are preferred over noun phrases in embedded clauses. Example:

(28) What country imports a product that Spain produces? - Denmark.
(29) What does it export?

Here "it" has to refer to the individual satisfying "what country", not to "Spain", which occurs in an embedded clause.

P3b. Head nouns are preferred over noun complements. Example:

(30) What is the capital of Austria? - Vienna.
(31) What is its population?

"Vienna", not "Austria", becomes the referent of "its", and the argument is analogous to that for P3a.

Subject noun phrases are preferred over non-subject noun phrases. In declarative contexts, this rule works quite well. It corresponds essentially to the focus rule of Sidner (1981). In a question-answering situation it is hardly applicable, since especially in wh-questions subject position and word order, which both play a role, tend to interfere. We therefore tend not to use this rule, but rather let the system ask back in cases where it would apply.

P6. Noun phrases preceding the pronoun are preferred over noun phrases succeeding the pronoun (or: anaphora is preferred over kataphora).

We now outline a procedure for "resolving" pronouns in the framework of the USL system and DRT. Let M = <U, Con> be the DRS representing the mutual knowledge, in particular the past discourse. Let K(s) be the DRS representing the current sentence s, and let p be a pronoun occurring in s for which an appropriate discourse referent has to be found. Let Ua(p) be the set of discourse referents accessible to p according to the accessibility relation given in sec. 2.2. Let further c be a function that applies to Ua(p) all the morphological, syntactic, and semantic criteria given above and yields a set Uc(p) as result. Now three cases have to be distinguished:

1. Uc(p) is empty. In this case the use of p was inappropriate.
2. Card(Uc(p)) is 1. In this case a referent for p has been uniquely determined, p is replaced by it in the DRS, and the procedure is finished.
3. Card(Uc(p)) is greater than 1. In this case the preference rules are applied.

Let, further, a function be given that applies to Uc(p), if the cardinality of Uc(p) is greater than 1, all the preference rules given above in the order indicated there, yielding the result Up. Card(Up) can never be 0, so two cases are possible: either the cardinality is 1, in which case a referent has been uniquely determined and the pronoun p can be eliminated in K, or the cardinality is greater than 1, in which case the use of p was inappropriate.

It can be inferred from the formulation of the pronominalization rules given above what morphological and syntactic information has to be stored with the discourse referents in the DRSs, and what semantic information has to be accessible from the schema of the database, to enable the application of the functions c and p. Hence we will not spell out these details here.
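The three-case procedure just outlined can be summarized in a short sketch. This is not the authors' implementation: the function names are invented, the hard-criteria filter is passed in as a parameter (for instance, a function like the hypothetical filter_candidates above), and the preference rules are modelled, as one possible reading of the text, as soft filters applied in order that are skipped when they would eliminate every remaining candidate.

```python
def apply_preferences(candidates, preference_rules):
    """Apply preference rules in their fixed order; each rule keeps the
    preferred candidates if any survive, otherwise it is skipped."""
    remaining = list(candidates)
    for rule in preference_rules:
        preferred = [c for c in remaining if rule(c)]
        if preferred:
            remaining = preferred
        if len(remaining) == 1:
            break
    return remaining

def resolve_pronoun(pronoun, accessible, hard_filter, preference_rules):
    """Return a discourse referent, or None when the pronoun use is
    inappropriate (no candidate, or still ambiguous after the preference
    rules, in which case the system would ask the user back)."""
    uc = hard_filter(pronoun, accessible)            # the paper's function c
    if not uc:
        return None                                  # case 1: inappropriate use
    if len(uc) == 1:
        return uc[0].referent                        # case 2: uniquely determined
    up = apply_preferences(uc, preference_rules)     # case 3: preference rules
    return up[0].referent if len(up) == 1 else None  # still ambiguous: ask back
```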
1 Overview: Relation to previous work

One of the main obstacles to the automated processing of natural language sentences (and a fortiori texts) is the proper treatment of anaphoric relations. Even though there is a plethora of research attempting to specify (both on the theoretical level and in connection with implementations) "strategies" for "pronoun resolution", it is fair to say a) that no uniform and comprehensive treatment of anaphora has yet been attained, and b) that surprisingly little effort has been spent in applying the results of research in linguistics and formal semantics in actual implemented systems. A quick glance at Hirst (1981) will confirm that there is a large gap between the kinds of theoretical issues and puzzling cases that have been considered on the one hand in the setting of computational linguistics and on the other in recent semantically oriented approaches to the formal analysis of natural languages.

One of the main aims of this paper is to bridge this gap by combining recent efforts in formal semantics (based on Montague grammar and Discourse Representation Theory) with the existing and relatively comprehensive grammars of German and English constructed in connection with the User Specialty Languages (USL) system, a natural language database query system briefly described below. We have drawn extensively -- as far as insights, examples, puzzles and adequacy conditions are concerned -- on the various "variable binding" approaches to pronouns (e.g. work in the Montague tradition, the illuminating discussions by Evans (1980) and Webber (1978), as well as recent transformational accounts). Our approach has, however, been most deeply influenced by those who have (like Smaby (1979, 1981) and Kamp (1981)) advocated dispensing with pronoun indexing on the one hand, and by those (like Chastain (1973), Evans (1980), and Kamp (1981)) who have emphasized the "referential" function of certain uses of indefinite noun phrases.

Contrary to what is assumed in most theories of pronominalization (namely that the most propitious way of dealing with pronouns is to consider them as a kind of indexed variable), we agree with Kamp (1981) and Smaby (1979) in treating pronouns as bona fide lexical elements at the level of syntactic representation. Treatments of anaphora have taken place within two quite distinct settings, so it seems. On the one hand, linguists have primarily been concerned with the specification of mainly syntactic criteria in determining the proper "binding" and "disjointness" criteria (cf. below), whereas computational linguists have in general paid more attention to anaphoric relations in texts, where semantic and pragmatic features play a much greater role. In trying to relate the two approaches one should be aware that, in the absence of any serious theory of text understanding, any attempt to deal with anaphora in unrestricted domains (even ones as simple as, for instance, children's stories) will encounter many diverse problems which, even when they influence anaphoric relations, are completely beyond the scope of a systematic treatment at the present moment.
We have thought it important, therefore, to impose some constraints right from the start on the type of discourse with respect to which our treatment of anaphora is to be validated (or falsified). Of course, what we are going to say should in principle be extendible to more complex types of discourse in the future. The context of the present inquiry is the querying of relational databases (as opposed to, say, general discourse analysis). The type of discourse we are interested in is thus dialogues in the setting of a relational database (which may be said to represent both the context of queries and answers as well as the "world"). It should be clear that a wide variety of anaphoric expressions is available in this kind of interaction; on the other hand, the relevant knowledge we assume in resolving pronominal relations must come from the information specified in the database (in the relations, in the various dependencies and integrity constraints) and in the rules governing the language.

We are making the following assumptions for database querying. A query dialogue is a sequence of pairs <query, answer>. For the sake of simplicity we assume that the possible answers are of the form:

- yes/no answer
- singleton answer (e.g. Spain, to a query like "Who borders Portugal?")
- set answer (e.g. [France, Portugal, ...] to a query like "Who borders Spain?")
- multiple answer (e.g. [<France, Spain>, ...] to a query like "Who borders who?")
- refusal (when a pronoun cannot receive a proper interpretation)

The USL system (Lehmann (1978), Ott and Zoeppritz (1979), Lehmann (1980)) provides an interface to a relational database management system for data entry, query, and manipulation via restricted natural language. The USL system translates input queries expressed in a natural language (currently German (Zoeppritz (1983)), English, and Spanish (Sopeña (1982))) into expressions in the SQL query language, and evaluates those expressions through the use of System R (Astrahan et al. (1976)). The prototype built has been validated with real applications and has thus shown its usability. The system consists of (1) a language processing component (ULG), (2) grammars for German, English, and Spanish, (3) a set of 75 interpretation routines, (4) a code generator for SQL, and (5) the database management system System R. USL runs under VM/CMS in a virtual machine of 7 MBytes; the working set size is 1.8 MBytes. ULG, the interpretation routines, and the code generator comprise approximately 40,000 lines of PL/I code.

The syntax component of USL uses the User Language Generator (ULG), which originates from the Paris Scientific Center of IBM France and has been described by Bertrand et al. (1976). ULG consists of a parser, a semantic executer, the grammar META, and META interpretation routines. META is used to process the grammar of a language. ULG accepts general phrase structure grammars written in a modified Backus-Naur form. With any rule it allows the specification of arbitrary routines to control its application or to perform arbitrary actions, and it allows sophisticated checking and setting of syntactic features. Grammars for German, English, and Spanish have been described in a form accepted by ULG. The grammars provide rules for those fragments of the languages relevant for communicating with a database. The USL grammars have been constructed in such a way that constituents correspond as closely as possible to semantic relationships in the sentence, and that parsing is made as efficient as possible. Where a true representation of the semantic relationships in the parse tree could not be achieved, the burden was put on the interpretation routines to remedy the situation.
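The processing path described above (natural language query, ULG parse, interpretation routines, SQL generation, evaluation by System R) can be pictured schematically as below. Every function body here is an invented stand-in rather than USL code; only the stage boundaries are taken from the text, and the generated SQL string is illustrative only.

```python
def parse_with_ulg(query: str) -> dict:
    """Stand-in for the ULG parser driven by the German/English/Spanish grammars."""
    return {"verb": "be", "subject": "capital", "of": "Austria"}  # toy parse

def interpret(parse_tree: dict) -> dict:
    """Stand-in for the interpretation routines, which map the parse tree
    onto virtual relations such as CAPITAL(NOM_CAPITAL, OF_COUNTRY)."""
    return {"relation": "CAPITAL", "select": "NOM_CAPITAL",
            "where": ("OF_COUNTRY", parse_tree["of"])}

def generate_sql(semantic_form: dict) -> str:
    """Stand-in for the SQL code generator."""
    col, val = semantic_form["where"]
    return (f"SELECT {semantic_form['select']} FROM {semantic_form['relation']} "
            f"WHERE {col} = '{val}'")

print(generate_sql(interpret(parse_with_ulg("What is Austria's capital?"))))
# SELECT NOM_CAPITAL FROM CAPITAL WHERE OF_COUNTRY = 'Austria'
```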
The approach to interpretation in the USL system builds on the ideas of model-theoretic semantics. This implies that the meaning of structure words and syntactic constructions is interpreted systematically and independently of the contents of a given database. Furthermore, since a relational database can be regarded as a (partial) model in the sense of model theory, the interpretation of natural language concepts in terms of relations is quite natural. (A more detailed discussion can be found in Lehmann (1978).) In the USL system, extensions of concepts are represented as virtual relations of a relational database, which are defined on physically stored relations (base relations). The set of virtual relations represents the conceptual knowledge about the data and is directly linked to natural language words and phrases. This approach has the advantage that extensions of concepts can relatively easily be related to objects of conventional databases.

For an illustration of the connection between virtual relations and words, consider the following example. Suppose that for a geographical application someone has arranged the data in the form of the relation

CO (COUNTRY, CAPITAL, AREA, POPULATION)

Now virtual relations such as the following, which correspond to concepts, can be formed by simply projecting out the appropriate columns of CO:

CAPITAL (NOM_CAPITAL, OF_COUNTRY)

Standard role names (OF, NOM, ...) establish the connection between syntactic constructions and columns of virtual relations and enable answering questions such as

(1) What is Austria's capital?

in a straightforward and simple way. Standard role names are surface oriented because this makes it possible for a user not trained in linguistics to define his own words and relations. (For a complete list of standard role names see e.g. Zoeppritz (1983).) We are currently working on the integration of the concepts underlying the USL system with Discourse Representation Theory, which is described in the next section.
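A small sketch of the projection idea may help; it is not USL code (USL defines such views over System R), the sample rows are invented, and the helper project() is a hypothetical name.

```python
# Base relation CO(COUNTRY, CAPITAL, AREA, POPULATION) with invented rows.
CO = [
    {"COUNTRY": "Austria", "CAPITAL": "Vienna", "AREA": 83879,  "POPULATION": 7_500_000},
    {"COUNTRY": "Spain",   "CAPITAL": "Madrid", "AREA": 505990, "POPULATION": 37_400_000},
]

def project(relation, mapping):
    """Form a virtual relation by projecting and renaming columns.
    mapping: {new_column: old_column}."""
    return [{new: row[old] for new, old in mapping.items()} for row in relation]

# Virtual relation CAPITAL(NOM_CAPITAL, OF_COUNTRY), linked to the noun "capital";
# the standard role names NOM and OF tie syntactic slots to columns.
CAPITAL = project(CO, {"NOM_CAPITAL": "CAPITAL", "OF_COUNTRY": "COUNTRY"})

# "What is Austria's capital?" then amounts to a selection on OF_COUNTRY:
answer = [r["NOM_CAPITAL"] for r in CAPITAL if r["OF_COUNTRY"] == "Austria"]
print(answer)  # ['Vienna']
```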
We have already implemented a procedure which generates Discourse Representation Structures from USL's semantic trees and which covers the entire fragment of language described in Kamp (1981).

In this section we give a brief description of Kamp's Discourse Representation Theory (DRT) in as much as it relates to our concerns with pronominalization. For a more detailed discussion of this theory and its general ramifications for natural language processing, cf. the papers by Kamp (1981) and Guenthner (1983a, 1983b). According to DRT, each natural language sentence (or discourse) is associated with a so-called Discourse Representation Structure (DRS) on the basis of a set of DRS formation rules. These rules are sensitive both to the syntactic structure of the sentences in question and to the DRS context in which the sentence occurs. In the formulation of Kamp (1981) the latter is really of importance only in connection with the proper analysis of pronouns. We feel, on the other hand, that the DRS environment of a sentence to be processed should determine much more than just the anaphoric assignments. We shall discuss this issue - in particular as it relates to problems of ambiguity and vagueness - in more depth in a forthcoming paper.

A DRS K for a discourse has the general form

K = <U, Con>

where U is a set of "discourse referents" for K and Con a set of "conditions" on these individuals. Conditions can be either atomic or complex. An atomic condition has the form

P(t1, ..., tn)  or  t1 = c

where ti is a discourse referent, c a proper name, and P an n-place predicate. The only complex condition we shall discuss here is the one representing universally quantified noun phrases or conditional sentences. Both are treated in much the same way. Let us call these "implicational" conditions:

K1 IMP K2

where K1 and K2 are also DRSs. With a discourse D is thus associated a Discourse Representation Structure which represents D in a quantifier-free "clausal" form and which captures the propositional import of the discourse by - among other things - establishing the correct pronominal connections.

What is important for the treatment of anaphora in the present context is the following:

a) Given a discourse with a principal DRS Ko and a set of non-principal DRSs (or conditions) Ki among its conditions, all discourse referents of Ko are admissible referents for pronouns in sentences (or phrases) giving rise to the various embedded Ki's. In particular, all occurrences of proper names in a discourse will always be associated with discourse referents of the principal DRS Ko. (This is on the admittedly unrealistic assumption that proper names refer uniquely.)

b) Given an implicational DRS of the form K1 IMP K2 occurring in a DRS K, a relation of relative accessibility between DRSs is defined as follows: K1 is accessible from K2, and all K' accessible from K1 are also accessible from K2. In particular, the principal DRS Ko is accessible from its subordinate DRSs (for a precise definition cf. Kamp (1981)). The import of this definition for anaphora is simply that if a pronoun is being resolved (i.e. interpreted) in the context of a DRS K' from which a set K of DRSs is accessible, then the union of all the sets of discourse referents associated with every Ki in K is the set of admissible candidates for the interpretation of the pronoun. The following illustration will make this clear:

K(Every country imports a product it needs):
  [u1 | country(u1)]  IMP  [u2 | product(u2), need(u1,u2), import(u1,u2)]

This sentence (as well as its interrogative version) allows only one interpretation of the pronoun "it" according to DRT. It does not introduce any discourse referent available for pronominalization in later sentences (or queries). But in a DRS like the following, DRT does not - as it stands - account for pronoun resolution:

In general, then, given a sentence (or discourse) represented in a DRS, there will be more candidates for admissible pronoun assignments than one should like to have available when a particular pronoun is to be interpreted. The rules described in Section 3 are meant to capture some of the regularities that arise in typical database querying interactions.

c) Finally, given a DRS for a discourse D, we can say that a pronoun is properly referential iff it is represented by (i.e. eliminated in favor of) a discourse referent ui occurring in the domain of the principal DRS representing D. (In the context of the constructions illustrated so far, this will be true in particular of proper names as well as of indefinite noun phrases not in the scope of a universal noun phrase or a conditional.)
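To make the DRS format and the accessibility relation just described concrete, here is a minimal sketch. The class and field names are invented for illustration and do not come from the paper's implementation; subordination and the antecedent link of an implicational condition are modelled simply as parent and antecedent pointers.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DRS:
    referents: set = field(default_factory=set)     # U: discourse referents
    conditions: list = field(default_factory=list)  # Con: atomic conditions as strings
    parent: Optional["DRS"] = None                   # DRS this one is subordinate to
    antecedent: Optional["DRS"] = None               # K1 of an implication K1 IMP K2

def accessible_referents(k: DRS) -> set:
    """Union of the discourse referents of k and of all DRSs accessible from k:
    the antecedent of an implication is accessible from its consequent, and
    accessibility is passed up toward the principal DRS."""
    seen = set(k.referents)
    for link in (k.antecedent, k.parent):
        if link is not None:
            seen |= accessible_referents(link)
    return seen

# "Every country imports a product it needs":
principal = DRS()
k1 = DRS(referents={"u1"}, conditions=["country(u1)"], parent=principal)
k2 = DRS(referents={"u2"},
         conditions=["product(u2)", "need(u1,u2)", "import(u1,u2)"],
         parent=principal, antecedent=k1)
print(accessible_referents(k2))  # {'u1', 'u2'}: "it" inside the consequent can pick up u1
```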
The main problem for the treatment of anaphora, then, is to determine which possible discourse referents should be chosen when we come to the interpretation of a particular pronoun occurrence pi in the formation of the extension of the DRS in which we are working. We would like to suggest the following strategy as a starting point. Consider a query dialogue Q with an already established DRS K and the utterance of a query S, where S contains occurrences of personal pronouns. Suppose further that A(S) is the sole syntactic analysis available for S. Then we regard the construction of the extension of the DRS obtained on the basis of S and K as the value of a partial function f defined on K and A(S). More generally still, as Kamp himself suggests, we can regard the "meaning" (or information content) of a sentence to be that partial function from DRSs to DRSs.

In a given dialogue both the queries and the answers will have the side effect of introducing new individuals and "preference" or "salience" orderings on these individuals, and we want to allow for pronominal reference to these, much in the same way that in a text the preceding sentences may have determined a set of possible antecedents for pronouns in the currently processed sentence. The DRS built up in the process of a querying session will constitute the "mutual knowledge" available to the user in specifying his further queries as well as in his uses of pronouns. It is on the individuals introduced in the DRSs that the rules to be discussed below are intended to operate.
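A hedged sketch of this "meaning as a partial function from DRSs to DRSs" view: each query-answer pair extends the dialogue DRS that serves as the mutual knowledge. The class and method names (DialogueDRS, extend) are invented, and the flat dictionaries stand in for real DRS construction driven by the syntactic analysis A(S).

```python
class DialogueDRS:
    def __init__(self):
        self.referents = {}     # discourse referent -> introducing phrase
        self.conditions = []    # conditions accumulated from queries and answers

    def extend(self, query_referents, query_conditions, answer_individuals):
        """The partial function f(K, A(S)): extend K with the individuals and
        conditions contributed by the analysed query S and by its answer.
        Returns None (refusal) if pronoun resolution failed upstream."""
        if query_referents is None:
            return None                       # refusal: pronoun had no interpretation
        self.referents.update(query_referents)
        for name in answer_individuals:       # answers also introduce referents
            self.referents.setdefault(name, name)
        self.conditions.extend(query_conditions)
        return self

# (23) "What country joined the EC after 1980?" -- answer: Greece
k = DialogueDRS()
k.extend({"u1": "what country"}, ["joined_EC_after_1980(u1)"], ["Greece"])
# A later "it" may now pick up u1 or Greece, subject to the rules above.
print(sorted(k.referents))  # ['Greece', 'u1']
```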
null
null
Many well-known and puzzling cases have not been addressed here, among them plural anaphora, so-called pronouns of laziness, and "one"-pronominalization, to name just a few. We have not said anything about phenomena such as discourse topic, focus, or coherence and their influence on anaphora. Their effects are captured in our preference rules to some degree, but no one can precisely say how. In spite of claims to the contrary, we believe that much work is still required before these notions can be used effectively in natural language processing.

By limiting ourselves to the relatively well-defined communicative situation of database interaction, we have been able to state precisely what rules are applicable in the fragment of language we are dealing with. We are currently working on the analysis of running texts, but again in a well-delineated domain, and we hope to be able to extend our theory on the basis of the experience gained. We are convinced that serious progress in the understanding of anaphora and of discourse phenomena in general is possible only through careful control of the environment, and on a solid syntactic and semantic foundation.
null
null
null
null
{ "paperhash": [ "zoeppritz|syntax_for_german_in_the_user_specialty_languages_system", "sidner|focusing_for_interpretation_of_pronouns", "lehmann|interpretation_of_natural_language_in_an_information_system", "astrahan|system_r:_relational_approach_to_database_management", "grosz|the_representation_and_use_of_focus_in_dialogue_understanding." ], "title": [ "Syntax for German in the user specialty languages system", "Focusing for Interpretation of Pronouns", "Interpretation of Natural Language in an Information System", "System R: relational approach to database management", "The representation and use of focus in dialogue understanding." ], "abstract": [ "Continuous work-up of moist, phosphorus-containing residues, particularly of residues which originate from the filtration of yellow phosphorus or phosphorus-containing effluent water. To this end, intimate contact is produced, in an electrical precipitation zone adapted to free phosphorus furnace gas from dust, between the moist phosphorus-containing residue and hot dust originating from phosphorus furnace gas and covering the bottom portion of said electrical precipitation zone. Residue and dust are contacted in a quantitative ratio between 1 and 2. Resulting, substantially phosphorus-free residue is mechanically removed from the electrical precipitation zone together with dust precipitated from the phosphorus furnace gas, in the electrical precipitation zone. A phosphorus/water-mixture produced by vaporization of the moist residue is passed through the electrical precipitation zone and the mixture is introduced jointly with dust-free phosphorus furnace gas coming from the precipitation zone into a condensation zone placed downstream of the precipitation zone, wherein the mixture and dust-free phosphorus furnace gas are precipitated.", "Recent studies in both artificial intelligence and linguistics have demonstrated the need for a theory of the comprehension of anaphoric expressions, a theory that accounts for the role of syntactic and semantic effects, as well as inferential knowledge in explaining how anaphors are understood. In this paper a new approach, based on a theory of the process of focusing on parts of the discourse, is used to explain the interpretation of anaphors. The concept of a speaker's foci is defined, and their use is demonstrated in choosing the interpretations of personal pronouns. The rules for choosing interpretations are stated within a framework that shows: how to control search in inferring by a new method called constraint checking; how to take advantage of syntactic, semantic and discourse constraints on interpretation; and how to generalize the treatment of personal pronouns, to serve as a framework for the theory of interpretation for all anaphors.", "This paper discusses some of the linguistic problems encountered during the development of the User Specialty Languages (USL) system, an information system that accepts a subset of German or English as input for query, analysis, and updating of data. The system is regarded as a model for portions of natural language that are relevant to interactions with a data base. The model provides insight into the functioning of language and the linguistic behavior of users who must communicate with a machine in order to obtain information. The aim of application independence made it necessary to approach many problems from a different angle than in most comparable systems. 
Rather than a full treatment of the linguistic capacity of the system, details of phenomena such as time handling, coordination, quantification, and possessive pronouns are presented. The solutions that have been implemented are described, and open questions are pointed out.", "System R is a database management system which provides a high level relational data interface. The systems provides a high level of data independence by isolating the end user as much as possible from underlying storage structures. The system permits definition of a variety of relational views on common underlying data. Data control features are provided, including authorization, integrity assertions, triggered transactions, a logging and recovery subsystem, and facilities for maintaining data consistency in a shared-update environment.\nThis paper contains a description of the overall architecture and design of the system. At the present time the system is being implemented and the design evaluated. We emphasize that System R is a vehicle for research in database architecture, and is not planned as a product.", "Abstract : This report develops a representation of focus of attention thatcircumscribes discourse contexts within a general representation ofknowledge. Focus of attention is essential to any comprehension processbecause what and how a person understands is strongly influenced bywhere his attention is directed at a given moment. To formalize thenotion of focus, the need for and the use of focus mechanisms areconsidered from the standpoint of building a computer system that canparticipate in a natural language dialogue with a ser, Two ranges offocus, global and immediate, are investigated, and representations forincorporating them in a computer system are developed.The global focus in which an utterance is interpreted is determinedby the total discourse and situational setting of the utterance. Itinfluences what is talked about, how different concepts are introduced,and how concepts are referenced. To encode global focuscomputationally, a representation is developed that highlights thoseitems that are relevant at a given place in a dialogue. The underlyingknowledge representation is segmented into subunits, called focusspaces, that contain those items that are in the focus of attention of adialogue participant during a particular part of the dialogue.Mechanisms are required for updating the focus representation,because, as a dialogue progresses, the objects and actions that arerelevant to the conversation, and therefore in the participants' focusof attention, change. Procedures are described for deciding when andhow to shift focus in task-oriented dialogues, i.e., in dialogues inwhich the participants are cooperating in a shared task. Theseprocedures are guided by a representation of the task being performed.The ability to represent focus of attention in a languageunderstanding system results in a new approach to an important problemin discourse comprehension -- the identification of the referents ofdefinite noun phrases." ], "authors": [ { "name": [ "M. Zoeppritz" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "C. Sidner" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "H. Lehmann" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "M. Astrahan", "M. Blasgen", "D. Chamberlin", "K. Eswaran", "J. Gray", "Patricia P. Griffiths", "W. F. King", "R. 
Lorie", "Paul R. McJones", "J. W. Mehl", "G. R. Putzolu", "I. Traiger", "B. W. Wade", "V. Watson" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "B. Grosz" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null, null, null ], "s2_corpus_id": [ "13284858", "16805751", "12706597", "11840729", "61114426" ], "intents": [ [ "background", "methodology" ], [], [ "background" ], [], [] ], "isInfluential": [ false, false, true, false, false ] }
Problem: The paper addresses the challenge of automated processing of natural language sentences, specifically focusing on the proper treatment of anaphoric relations in the context of database interaction. Solution: The hypothesis proposed is that by combining the User Specialty Languages system and Discourse Representation Theory, strict and preferential rules for pronominalization can be formulated, allowing for the proper assignment of referents to pronouns in the context of database querying interactions.
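The record above only summarizes the approach. As an informal illustration of what "strict" versus "preferential" pronominalization rules might look like operationally, the following Python sketch is a hypothetical reconstruction of ours, not code or terminology from the paper: discourse referents introduced earlier in the interaction are stored, strict rules act as hard filters (number and gender agreement), and a preferential rule (recency) orders the remaining candidates.

```python
# Minimal, hypothetical sketch of strict vs. preferential pronoun-resolution
# rules over a store of discourse referents; all names are illustrative only.
from dataclasses import dataclass

@dataclass
class Referent:
    surface: str      # e.g. "a supplier"
    gender: str       # "masc" | "fem" | "neut"
    number: str       # "sg" | "pl"
    position: int     # order of introduction in the discourse

def resolve_pronoun(pronoun_gender, pronoun_number, referents):
    """Apply strict rules as filters, then a preferential rule as ordering."""
    # Strict rule: the antecedent must agree in gender and number.
    candidates = [r for r in referents
                  if r.gender == pronoun_gender and r.number == pronoun_number]
    # Preferential rule: prefer the most recently introduced referent.
    candidates.sort(key=lambda r: r.position, reverse=True)
    return candidates  # ordered candidate list; empty means resolution failed

if __name__ == "__main__":
    store = [Referent("a supplier", "masc", "sg", 0),
             Referent("a part", "neut", "sg", 1)]
    # Resolving a masculine singular pronoun leaves only "a supplier".
    print(resolve_pronoun("masc", "sg", store))
```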
497
0.030181
null
null
null
null
null
null
null
null
7892e1619046b154916ababbb38da6fddac1820b
1786876
null
Case Role Filling as a Side Effect of Visual Search
This paper addresses the problem of
{ "name": [ "Marburger, Heinz and", "Wahlster, Wolfgang" ], "affiliation": [ null, null ] }
null
null
First Conference of the {E}uropean Chapter of the Association for Computational Linguistics
1983-09-01
17
12
null
([11]) like (1a) rather than with a simple Yes. Are you going to travel this summer? (1a) Yes, to Sicily. In the absence of special information about the previous course of the dialog or the intentions of the questioner (the unmarked case) an answer like (1a) seems more appropriate than (1b) or (1c). (1b) Yes, with an old school friend. (1c) Yes, by plane. Of course, there are numerous dialog situations in which (1b) or (1c) could be generated as a communicatively adequate response on the basis of a particular partner model. But it still must be asked why in dialogs of the type 'information supply' the unmarked response takes the form (1a) and not (1b) or (1c). In this paper we will present the results of a computational study of this problem for the domain [...] answer, while in (3), where only the obligatory deep case slots are filled, an extended response like (3a) can be expected. (2) Did you break the window with your slingshot yesterday? (2a) Yes. (3) Did you break the window? (3a) Yes, with my slingshot. Since not every optional deep case of a given verb unspecified in the question is suitable for an unmarked extended response (e.g. (1a)-(1c)) we may define the problem more precisely by asking which of the deep case slots unspecified in the question are to be chosen as the unmarked values. [...] 'locomotion verbs' let us consider questions (4) and (5), which refer to a visually present world of discourse. In each case perceptual processes are assumed as a prerequisite for the answer. (4) Which vehicle stopped? (4a) The bus, on Hartungstreet. (4b) The bus, because the driver stepped on the brake. (5) Did the bus turn off? (5a) Yes, from Hartungstreet onto Schlueterstreet. (5b) Yes, together with the taxi cab. In order to verify a stop-event it is necessary to determine the end point of the motion (cf. (4a)) but not the cause (cf. (4b)). For a turn-off event a change of direction between source and goal must be established (cf. (5a)). It is not essential to determine whether other objects make this change of direction at the same time (cf. (5b)). Hence case role filling for the construction of an extended response can be regarded as a side effect of the visual search necessary to answer the question. This also appears plausible when seen in the light of the beliefs that the questioner imputes to the answerer. [...] fulfilled. An important benefit of the object-oriented style is that it lends itself to a particularly simple and lucid kind of modularity. [...] (fig. ?). Since in the example the verb 'to go by' has a case frame the second strategy is applied. After an access to the case-frame lexicon the case frame is constructed. This case frame is used to guide the parsing in the following manner: The algorithm first attempts to recognize those syntactic con- (See section 3.4.2 for how this process functions.)
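The passage above argues that the deep cases worth adding to an unmarked extended answer are exactly those that the verification procedure has to compute anyway (endpoint for a stop-event, source and goal for a turn-off event). A minimal sketch of that selection step follows, assuming a hand-made table of verification-relevant roles per locomotion verb; the table entries, function names, and role labels below are our own illustrative assumptions, not taken from HAM-ANS.

```python
# Hypothetical sketch: choose which unspecified deep-case slots to verbalize
# in an unmarked extended response. Assumption: for each locomotion verb we
# know which roles the visual verification procedure must determine anyway.
VERIFICATION_ROLES = {
    "stop":     {"endpoint"},        # cf. (4a) vs. (4b): endpoint, not cause
    "turn_off": {"source", "goal"},  # cf. (5a) vs. (5b): change of direction
    "go_by":    {"location"},
}

def slots_for_extended_response(verb, roles_specified_in_question):
    """Return the case roles to add to the answer: verification-relevant
    roles that the question left unspecified."""
    relevant = VERIFICATION_ROLES.get(verb, set())
    return relevant - set(roles_specified_in_question)

if __name__ == "__main__":
    # "Did the bus turn off?" specifies no source/goal, so both are candidates.
    print(slots_for_extended_response("turn_off", []))        # {'source', 'goal'}
    # If the question already names the endpoint, nothing is left to add.
    print(slots_for_extended_response("stop", ["endpoint"]))  # set()
```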
null
null
A verification of an event GO BY is possible only for TRUCK2. The additional information extracted during the process of visual search - the specific location of the event - is recorded in the locative slot. During the formation of the result of the evaluation, the system, guided by general heuristics, decides whether the additional detail will cause too great a complexity in the answer or not [11]. In this case the complexity is suitable and the location will be mentioned in the answer. The word 'which' is defined as a quantifier that causes a description of a set of objects to be returned (instead of a truth value). Thus the set of reference objects for which the proposition in question could be verified, i.e. TRUCK2, is substituted for the noun phrase 'which trucks'.
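The evaluation step described above can be sketched as follows. This is our own illustrative reconstruction, not HAM-ANS code: the object names, the event data, and the complexity heuristic are invented, and only the overall flow (verification fills optional slots as a side effect, a heuristic gates how much extra detail is verbalized, and a 'which' question returns the set of verified objects) mirrors the description.

```python
# Illustrative reconstruction of the evaluation step described above:
# verification of an event fills optional case slots as a side effect,
# and a simple heuristic decides whether to verbalize the extra detail.
SCENE_EVENTS = [  # invented stand-in for the vision system's event memory
    {"object": "TRUCK2", "event": "go_by", "location": "Hartungstreet"},
    {"object": "TRUCK1", "event": "stop",  "location": "Schlueterstreet"},
]

def evaluate_which_question(event_type, max_extra_slots=1):
    answers = []
    for ev in SCENE_EVENTS:
        if ev["event"] != event_type:
            continue                      # verification fails for this object
        # Side effect of verification: the location is recorded in the
        # locative slot of the answer candidate.
        extra = {"locative": ev["location"]}
        # Heuristic: include the detail only if the answer stays simple enough.
        if len(extra) > max_extra_slots:
            extra = {}
        answers.append((ev["object"], extra))
    return answers  # 'which' returns the set of verified objects, not a truth value

if __name__ == "__main__":
    print(evaluate_which_question("go_by"))
    # -> [('TRUCK2', {'locative': 'Hartungstreet'})]
```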
We have attempted to show that case role filling for the construction of an unmarked extended response can be regarded as a side effect of the visual search necessary to answer questions referring to a visually present domain of discourse. A new method for the representation of the referential semantics associated with locomotion verbs has been presented in the framework of object-oriented programming based on the Flavor system. The approach presented has been useful in extending the communicative capabilities of the dialog system HAM-ANS as an interface to a vision system.
Main paper: an example of the processing of an utterance: A verification of an event GO BY is possible only for TRUCK2. The additional ~nformation extracted durin 9 the process of visual search -the specific location of the event -is recorded in the locative slot.During the formation of the result of the evaluation, the system, guided by general heuristics, decides whether the additional detail will cause too ~reat a complexity in the answer or not [11] . In this case the complexity is suitable and the location will be mentioned in the answer. The word 'which' is defined as quantifier that causes a description of a set of objects to be returned (instead of a truth value). Thus the set of reference objects for which the proposition in question could be verified, i.e. TRUCK2, is substituted for the noun phrase 'which trucks'. concluszon: We have attempted to show that case role filling for the construction of an unmarked extended response can be regarded as a side effect of the visual search necessary to answer questions referring to a visually present domain of discourse. A new method for the representation of the referential semantics associated with locomotion verbs has been presented in the framework of objectoriented programming based on the Fla.vor system. The approach presented has been useful in extending the communicative capabilities of the dialog system HAM-AN$ as an interface to a vision system. : (11]) like (la) rather than with a simple Yes.Are you going to travel this summer? (la) Yes, to Sicily.In the absence of special information about the previous course of the dialog or the intentions of the questioner (the unmarked case) an answer like (la) seems more appropriate than (Ib) or (Ic).(Ib) Yes, with an old school friend. (Ic) Yes, by plane.OF course, there are numerous dialog situatlons in which (lb) os" (lc) could be generated as a communicatively adequate response on the basis of a par. t±cular partner model. But it still must b~ asked why in dialogs of the type 'information ,upply' the unmarked response takes the form (la) ~nd not (lh) or (lc).In this paper we will present the results of a computational study of this problem for the domain answer, while in (3), where only the obligatory deep case slots are filled, an extended response like (3a) can be expected.(2) Did you break the window with your slingshot yesterday? (2a) Yes.(3)Did you break the window? (3a) Yes, with my slingshot.Since not every optional deep case of a given verb unspecified in the question is suitable for an unmarked extended response (e.g.(la)-(lc)) we may define the problem more precisely by asking which of the deep case slots unspecified in the question are to be chosen as the unmarked values.'locomotion verbs' let us consider questions (4) and (5), which refer to a visually present world of discourse. In each case perceptual processes are assumed as a prerequisite for the answer.Which vehicle stopped? &a) The bus, on Hartungstreet. 4b) The bus, because the driver stepped on the brake.Did the bus turn off? 5a) Yes, from Hartungstreet onto Schlueterstteet. 5b) Yes, together with the taxi cab. In order to verify a stop-event it is necessary to determine the end point of the motion (Cf. (4a)) but not the cause (cf. (4b)). For a turn-off event a change of direction between source and goal must be established (cf. (Sa)). It is not essential to determine whether other objects make this change of direction at the same time (cf. 
(Sb)).Hence case role filling for the construction of an extended response can be regarded as a side effect of the visual, search necessary to answer the question.This also appears plausible when seen in the light of the beliefs that the questioner imputes to the answerer. fulfilled. An important benefit of the objectoriented style is that it lends itself to a particularly simple and lucid kind of modularity. fig. ? ). Since in the example the verb 'to go by' has a case frame the second strategy is applied. After an access to the case-frame lexicon the case frame is constructed. This case frame is used to guide the parsing in the following manner: The al@orithm first attempts to recognize those syntactic con- (See section 3.4.2 for how this process functions,) Appendix:
null
null
null
null
{ "paperhash": [ "wahlster|over-answering_yes-no_questions:_extended_responses_in_a_nl_interface_to_a_vision_system", "hoeppner|beyond_domain-independence:_experience_with_the_development_of_a_german_language_access_system_to_highly_diverse_background_systems", "jameson|user_modelling_in_anaphora_generation:_ellipsis_and_definite_description", "weinreb|the_lisp_machine_manual", "roberts|the_frl_manual" ], "title": [ "Over-Answering Yes-No Questions: Extended Responses in a NL Interface to a Vision System", "Beyond Domain-Independence: Experience With the Development of a German Language Access System to Highly Diverse Background Systems", "User Modelling in Anaphora Generation: Ellipsis and Definite Description", "The Lisp Machine manual", "The FRL Manual" ], "abstract": [ "This paper addresses the problem of overanswering yes-no questions, i.e. of generating extended responses that provide additional information to yes-no questions that pragmatically must be interpreted as wh-questions. Although the general notion of extended responses has already been explored, our paper reports on the first attempt to build a NL system able to elaborate on a response as a result of anticipating obvious follow-up questions, in particular by providing additional case role fillers, by using more specific quantifiers and by generating partial answers to both parts of questions containing coordinating conjunctions. As a further innovation, the system explicitly deals with the informativeness-simplicity tradeoff when generating extended responses. We describe both an efficient implementation of the proposed methods, which use message passing as realized by the FLAVOR mechanism and the extensive linguistic knowledge in corporated in the verbalization component. The structure of the implemented NL generation component is illustrated using a detailed example of the systems\"s performance as an interface to an image understanding system.", "For natural language dialog systems, going beyond domain independence means the attempt to create a core system that can serve as a basis for interfaces to various application classes that differ not only with respect to the domain of discourse but also with respect to dialog type, user type, intended system behavior, and background system. In the design and implementation of HAM ANS. which is presently operational as an interface to an expert system, a vision system and a data base system, we have shown that going beyond domain independence is possible. HAM-ANS is a large natural language dialog system with both considerable depth and breadth, which accepts typed input in colloquial German and produces typed German responses quickly enough to make it practical for real-time interaction. One goal of this paper is to report on the lessons learned during the realization of the system HAM-ANS. This paper introduces the overall structure of the system's processing units and knowledge sources. In addition we describe some of the innovative features concerning the strategy of semantic interpretation.", "This paper shows how user modelling can improve the anaphoric utterances generated by a dialogue system. Two kinds of anaphora are examined: contextual ellipsis and the anaphoric use of singular definite noun phrases. In connection with ellipsis generation, anticipation of the way in which the user would be likely to reconstruct a given utterance can help to ensure that the system's utterances are not so brief as to be ambiguous or misleading. 
When generating noun phrases to characterize specific objects with which the user is not familiar, the system may take into account the existential assumptions, domain-related desires, and referential beliefs ascribed to the partner. These applications of user modelling are illustrated as realized in the dialogue system HAM-ANS, and some possible generalizations and extensions of the strategies described are discussed.", "This 471-page, softcover manual describes the programming language and software environment of the Lisp Machine developed at M.I.T.'s Artificial Intelligence Laboratory over the past 8 years. The Lisp Machine is the result of a successful experiment in computer science: a distributed computing system consisting of a network of powerful 32-bit personal computers, implemented with custom hardware and software as complete, interactive graphical workstations. Each machine consists of a 32-bit computer with 64 megabytes of virtual memory, 1 to 16 megabytes of main memory, 80 megabytes (or larger) disk, 800x900 graphics display (color optional), mouse, keyboard, speaker and 4 million bit/second local network interface, which allows connections to other Lisp Machines, primers and file servers.", "Abstract : The Frame Representation Language (FRL) is described. FRL is an adjunct to LISP which implements several representation techniques suggested by Minsky's concept of a frame: defaults, constraints, inheritance, procedural attachment, and annotation. (Author)" ], "authors": [ { "name": [ "W. Wahlster", "H. Marburger", "A. Jameson", "Stephan Busemann" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "W. Hoeppner", "T. Christaller", "H. Marburger", "K. Morik", "Bernhard Nebel", "Mike O'Leary", "W. Wahlster" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "A. Jameson", "W. Wahlster" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "D. Weinreb", "D. A. Moon" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "R. B. Roberts", "I. Goldstein" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null, null, null ], "s2_corpus_id": [ "1787204", "6167161", "5267882", "6160639", "61042072" ], "intents": [ [ "methodology", "background" ], [ "methodology" ], [ "background" ], [], [] ], "isInfluential": [ true, false, false, false, false ] }
null
497
0.024145
null
null
null
null
null
null
null
null
93d40994e97efd441e0ba441d8e9a8a237dbea00
11417224
null
Towards Better Understanding of Anaphora
This paper presents a syntactical method of interpreting pronouns in Polish. Using the surface structure of the sentence as well as grammatical and inflexional information accessible during syntactic analysis, an area of reference is marked out for each personal and possessive pronoun. This area consists of a few internal areas inside the current sentence and an external area, i.e. the part of the text preceding it. In order to determine that area of reference, several syntactic sentence-level restrictions on anaphora interpretation are formulated. Next, when looking at the area of the pronoun's reference, all NPs which agree with the pronoun in number and gender can be selected, and in this way the set of surface referents of each pronoun can be created. It can be used as data for further semantic analysis.
{ "name": [ "Dunin-Keplicz, Barbara" ], "affiliation": [ null ] }
null
null
First Conference of the {E}uropean Chapter of the Association for Computational Linguistics
1983-09-01
11
6
null
Next, when looking at the area of the pronoun's reference, all NPs which agree with the pronoun in number and gender can be selected, and in this way the set of surface referents of each pronoun can be created. It can be used as data for further semantic analysis. Reference is one of the central concepts of any linguistic theory. In recent research into anaphora the term "reference" has been used in three different senses (Szwedek, 1981): (a) as a relation between the name and the thing named (Hall Partee, 1978); (b) as an association between noun phrases and mental entities in the language user's mind (Nash-Webber, 1978); (c) as an association between the occurrences of phrases in the text (Reinhart, 1981). However the reference is understood, in order to interpret anaphora correctly on the semantic level ((a) and (b)), first a stage (c) is necessary. In this paper I have taken the point of view presented under (c). I shall discuss the problem of anaphora in Polish sentences. My attention is focused on personal and possessive pronouns explicitly occurring in the text and moreover on zero pronouns, i.e. ellipsis of the NP in the subject position, specific for Slavonic languages. My purpose is the description of regularities of reference in the Polish language. I shall express them by defining the area of a pronoun's reference, i.e. those regions of the text where its antecedents should be found. These surface referents will be selected from among NPs occurring in the sentence. The research on anaphora made for English has led to the formulation of some structural rules using such relations as command, c-command and precede-and-command (Reinhart, 1981). I have been searching for analogous rules for Polish. But two essential differences have to be considered: (i) grammatical and morphological properties of Polish and English; (ii) different grammatical traditions. For English the rules concerning the coreference of entities were formulated on the basis of generative-transformational grammar. For Polish the first precise description of Polish syntax was formulated only recently by Szpakowicz, who based his work on the framework created by Saloni (Saloni, 1976; Saloni and Swidzinski, 1981). It is a kind of immediate-constituent grammar; the grammatical categories (case, gender, etc.) are applied not only to single words, but also to compound phrases. In my present work I have limited my attention to the subset of Polish described by Szpakowicz (Szpakowicz, 1983). Polish is a highly inflexional language and this fact has many and varied consequences. Surface referents of the pronoun will be selected from among those NPs which agree with the pronoun in number and gender. Strictly speaking, the grammatical categories of the pronoun should be compatible with the categories of the NP, but in cases of neutralization they cannot be fully determined. My method of determining the areas of a pronoun's reference is a syntactic one, because it is based on morphological and syntactical properties of the Polish language. I assume the availability of the surface structure of the sentence as well as grammatical and inflexional information accessible during a syntactic analysis. I deliberately do not make use of any semantic information, trying to get the most out of grammar. The feature I intend to provide is a complete definition of the area of a pronoun's reference. A. Internal and external areas of reference. In the process of determining the surface referents of the pronoun, first the area of its reference should be marked out. This area, i.e.
those regions of the text, where its antecedents should be found, is usually made up of several internal reference arehsp i.e. the appropriate bits of the current sentence, and an external area, the part of the text preceding the current sentence. The list of internal areas depends on the syntactic position of the pronoun in the sentence. q'o determine these areas it is necessary to formulate sentence-level anaphora restrictions for Polish.. These rules will determine the conditions of both obligatory coreference and 0bii~atory non-coreference of entities. Thus we have two situations to consider: (i) in the case of obligatory coreference one internal area of reference containing the appropriate referent should be marked out; (ii) in the case of obligatory" non-coreference the elements which are forbidden as surface referents of the pronoun should be excluded from the internal area. The coreference of entities which is qualified on the basis of some other premises will be called admissible coreference.At our disposal we have a multileveled, hierarchic surface structure of the sentence. Generally, it seems that internal areas can be identified with the constituents on the hi~hest level: subject, objects, modifiers, regardless of their syntactic realization. Strictly speaking, noun as well as NP or any sentential structures can be instances of internal areas of reference. The partitioning of sentence (i) illustrates i%:(i) "(Ewa i Piotr) poszli (do niego) (z dziewczynq, kt6r~% w{a~nie spotkali)"."Eva and Peter went to him with a girl which just fret".[3.Rules ccncernin~ coreference of entities in PolishThe following rules of excluding the coreference of entities concern a level deeper than that on the surface, because they refer to syntactical functions of phrases in the sentence. The first rule presents the problem of coreference of the subject and other nominal groups, i.e. objects and nominal trodifiers, in short called objects. It concerns reflexive pronouns, so it should be noted first that they differ from those in English, eg.: -possessive pronoun "sw6j" may have one of the following meanings: his, her, its. "Suddenly, near John, saw a snake" mast 9"Nagle, obok niego, ~ zobaczy~ w@za" "Suddenly, near him, saw a snake" masc (10) "Nagle, obok siebie, zobaczy{ w~-a" "Suddenly, near himself, saw a snake" (ii) "Nagle, obok siebie, Jn masc --zobaczy~ w~za" "Suddenly, near himself, he saw a snake"In examples (10) and (13.) the reflexive pronoun has appeared. These are the only two cases in which the coreference with the subject of the main sentence is permitted and even obligator'y. Such an interpretation is correct irrespective of the position of PP in the sentence, i.e. it does not depend on whether this phrase precedes or follows the subject.The basic criterion of excluding coreference works as follows:(i)it is valid only for a simple clause, without blocking coreference between the elements of the main sentence and the constituents of embedded clauses; (ii) it is obligatory on every level of the sentence, i.e. it concerns all the sentence constructions irrespective of their position in the structure of the whole sentence.(12) to (14) illustrate this:12) "Piot"~ nie wiedzia~, czy'~ pdjdzie do kina""Peter did not know, whether would go to the movies" Jan spotka{ ch*opca, kt6ry eo dawno ni e"o d~ e c~z'ii ..... "4""John met a boy, who didn't visit him for lon~"The interpretation of reflexive pronouns is not so easy as the criterion R 1 suggests. 
These pronouns can be involved in various compound phrases which often are ambiguous. Especially infinitive phrases are hard to interpret. In order to do this correctly, an implicit agent which will be called further the deep subject, should be obtained. It often needs a few hypotheses to be formulated. Let us consider an example. The sentence:(15) "Jan kaza{ stuzqcemu umyd siq" can be translated in two ways which exactly (15.1) "John told (the sevant) (to wash him)" (15.2) "John told (the servant) (to wash himself )"In the infinitive phrase "umyd si@" ("to wash him" or "to wash himself") which is standing in the object position, the reflexive pronoun "si~" is coreferential with the deep subject of this phrase. Thus its interpretation has to be determined. Here we have two possibilities:(i) the previoux object-"servant"interpretation (15.1) (it) the subject of the main sentence -"John"- interpretation (15.2)One of them is the referent of the deep subject. And so we come to the next rule:(R 2) In order to interpret the infinitive phrase, the deep subject of the phrase has to be selected from among the previous object (if any) and the subject of the main sentence.
The next sentence-level restriction on anaphora interpretation regulates the problem of coreference of NPs other than the subject, i.e. objects, among themselves.
null
"l~he next group of problems concerns the coreference of entities in a compound sentence, including the question of the subject. In a Polish sentence it needs not be explicit. Ellipsis of the I'~P in the subject position, often called "the elided subject", is a natural way of expressing "thematic cont,nu,ty' ' " and exemplifies an unaccented position in the sentence. On the other hand, the pronoun as the subject stands in syntactic opposition to the elided subject (zero pronoun) and exemplifies an accented position in the sentence. ~,'hile determining the antecedent of the subject of a simple sentence or a main clause in a compound sentence (explicit or implicit) we reach out to the external area of references. However, the basic criterion of excluding coreference is still valid. Peter"The interpretation of compound sentences is d~icult and sometimes leads to ambiguous results. The following rules concern mainly the coreference (or non-coreference) of elided subjects in co-ordinate and aubordinate clauses. In the case of co-ordinate clauses t~,o rules can be formulated:(R 4) I~or each two clauses in a sequence, if the elided subject is in the second clause, then the subject of the first clause should be extrapolated there (obliRatory coreference)."Piotr podszed~ do okna" (21)wsta~ od "Peter left the table and approached the window" (R 5) 5"or each two clauses in a sequence, the pronoun or zero pronoun subject in the first clause cannot be coreferential with the non-pronoun subject of the second clause (obligatory non-coreference).(22) ¢~ od/to~,~-piot~ podszed~ do okna""lie left the table and Peter approached the window"Interpreting subordinate clauses depends on the relative position of the main and the embedded clause.(R 6) If the embedded clause precedes the main clause and if both have elided subjects, these have to be coreferential (obligatory coreference).(zJ) Zanim 4~.~2z~>~ zgasi~ ~wiat~o" "Before leftmasc, turned Offmasc the light" (24) "Poniewa~ %~¢ zapyta~ o to""Because forgot , asked about it" masc masc (R ?) The elided subject in the embedded clause is a natural way of indicating the nearest candidate -the previous object (if it is there) or the subject of the main sentence (admissible coreference )."---"'--" --ze'*~ p6jdzie do (25) "Jan zapewni~ Plotra, kina" "~ __ -#'EQUATION"John promised Peter, that will go to the movies") The pronoun or zero pronoun subject in the main sentence can be coreferential with the non-pronoun subject of the embedded clause which precedes the main sentence (admissible coreference), but cannot be Coreferential with the non-pronoun subject of the embedded clause following the main sentence (obligatory non-coreference )."Zanin Jan w-y-szed{, ~ zgasi{ ~wiat{o""Before John left, turned off the light" masc l "~ z z~gasi{ ~wiat~o, zanim J aan wyszed{" { "Turned off the light, before John left" masc ,, O.~n-~ni e / __ wiedzia~, czy ~iot.r. 156jdzie do kina""He didn't know, whether Peter will go to the movies"Relative clauses are quite easy to interpret in Polish. Either their subject or object is replaced with pronoun "which" or "what" or their equivalents (only such types of relative clauses are described in the Szpakowicz grammar). These pronouns always indicate the NP next to which they stand and inherit gender, number and person from it. rfhus the obligatory coreference of relative pronoun and this NP is determined. 
Let us have a look at some examples: the list of internal areas of reference or the external area, both with certain restrictions on coreference, are determined. Next, more detailed results can be obtained. 1~'hen looking at the internal areas, all NPs which number--gender agree with the pronoun should be selected and a list of surface referents of pronoun together with a list of elements blocked as the referents can be drawn up. If no internal areas are marked out, the external area with the list of blocked elements is the result of the method presented here. Similary, while only admissible coreference is determined, the external area is marked out too and the list of blocked elements remains valid. On the other hand the obligatory coreference makes it possible to define the appropriate antecedent of the pronoun. The list of surface referents may be ordered by assunzin~ the specific method of traversing the parsin~ tree. I expext, that as for English, recency understood as a physical distance between the pronoun and its antecedent can be the first approximation of the probability.As expected the results of the method applied here need semantic verification. But at the same time they are a reasonable data for further semantic analysis. Data arrived at in this way make this process much easier. it seems that a similar procedure can be carried out for other languages. Full grammatical information should be used wherever it can simplify such complex process as the semantic analysis.Bonnie Lynn (1978) . A Formal Approach to Discourse Anaphora. Phl) thesis, Harvard University PARTEE, Barbara Hall (1978)
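As an informal illustration of the procedure summarized above (marking out areas of reference, filtering candidate NPs by number and gender agreement, excluding elements blocked by the sentence-level rules, and ordering the survivors by recency), here is a small Python sketch. The data structures and field names are our own approximations of the paper's notions, not an implementation of it.

```python
# Illustrative sketch of the surface-referent selection described above.
# Data structures are invented approximations of the paper's notions.
from dataclasses import dataclass

@dataclass
class NP:
    text: str
    gender: str    # e.g. "masc", "fem", "neut"
    number: str    # "sg" or "pl"
    distance: int  # distance (in words) back from the pronoun

def surface_referents(pronoun, area_nps, blocked):
    """Return candidate antecedents: NPs in the pronoun's area of reference
    that agree in number and gender, are not blocked by the sentence-level
    rules, ordered by recency (closest first)."""
    candidates = [np for np in area_nps
                  if np.gender == pronoun.gender
                  and np.number == pronoun.number
                  and np.text not in blocked]
    return sorted(candidates, key=lambda np: np.distance)

if __name__ == "__main__":
    area = [NP("Jan", "masc", "sg", 7), NP("Piotr", "masc", "sg", 3)]
    pron = NP("on", "masc", "sg", 0)   # "he"
    # With "Piotr" blocked by an obligatory non-coreference rule,
    # only "Jan" remains as a surface referent.
    print(surface_referents(pron, area, blocked={"Piotr"}))
```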
null
Main paper: excludin~ the coreference between objects: The next sentence-level restriction of anaphora interpretation regulates the problem of coreference of l'4Ps other than a subject, i.e. objects, between them. rules of interpretinq compound sentences: "l~he next group of problems concerns the coreference of entities in a compound sentence, including the question of the subject. In a Polish sentence it needs not be explicit. Ellipsis of the I'~P in the subject position, often called "the elided subject", is a natural way of expressing "thematic cont,nu,ty' ' " and exemplifies an unaccented position in the sentence. On the other hand, the pronoun as the subject stands in syntactic opposition to the elided subject (zero pronoun) and exemplifies an accented position in the sentence. ~,'hile determining the antecedent of the subject of a simple sentence or a main clause in a compound sentence (explicit or implicit) we reach out to the external area of references. However, the basic criterion of excluding coreference is still valid. Peter"The interpretation of compound sentences is d~icult and sometimes leads to ambiguous results. The following rules concern mainly the coreference (or non-coreference) of elided subjects in co-ordinate and aubordinate clauses. In the case of co-ordinate clauses t~,o rules can be formulated:(R 4) I~or each two clauses in a sequence, if the elided subject is in the second clause, then the subject of the first clause should be extrapolated there (obliRatory coreference)."Piotr podszed~ do okna" (21)wsta~ od "Peter left the table and approached the window" (R 5) 5"or each two clauses in a sequence, the pronoun or zero pronoun subject in the first clause cannot be coreferential with the non-pronoun subject of the second clause (obligatory non-coreference).(22) ¢~ od/to~,~-piot~ podszed~ do okna""lie left the table and Peter approached the window"Interpreting subordinate clauses depends on the relative position of the main and the embedded clause.(R 6) If the embedded clause precedes the main clause and if both have elided subjects, these have to be coreferential (obligatory coreference).(zJ) Zanim 4~.~2z~>~ zgasi~ ~wiat~o" "Before leftmasc, turned Offmasc the light" (24) "Poniewa~ %~¢ zapyta~ o to""Because forgot , asked about it" masc masc (R ?) The elided subject in the embedded clause is a natural way of indicating the nearest candidate -the previous object (if it is there) or the subject of the main sentence (admissible coreference )."---"'--" --ze'*~ p6jdzie do (25) "Jan zapewni~ Plotra, kina" "~ __ -#'EQUATION"John promised Peter, that will go to the movies") The pronoun or zero pronoun subject in the main sentence can be coreferential with the non-pronoun subject of the embedded clause which precedes the main sentence (admissible coreference), but cannot be Coreferential with the non-pronoun subject of the embedded clause following the main sentence (obligatory non-coreference )."Zanin Jan w-y-szed{, ~ zgasi{ ~wiat{o""Before John left, turned off the light" masc l "~ z z~gasi{ ~wiat~o, zanim J aan wyszed{" { "Turned off the light, before John left" masc ,, O.~n-~ni e / __ wiedzia~, czy ~iot.r. 156jdzie do kina""He didn't know, whether Peter will go to the movies" interpretation of relative clauses: Relative clauses are quite easy to interpret in Polish. Either their subject or object is replaced with pronoun "which" or "what" or their equivalents (only such types of relative clauses are described in the Szpakowicz grammar). 
These pronouns always indicate the NP next to which they stand and inherit gender, number and person from it. rfhus the obligatory coreference of relative pronoun and this NP is determined. Let us have a look at some examples: the list of internal areas of reference or the external area, both with certain restrictions on coreference, are determined. Next, more detailed results can be obtained. 1~'hen looking at the internal areas, all NPs which number--gender agree with the pronoun should be selected and a list of surface referents of pronoun together with a list of elements blocked as the referents can be drawn up. If no internal areas are marked out, the external area with the list of blocked elements is the result of the method presented here. Similary, while only admissible coreference is determined, the external area is marked out too and the list of blocked elements remains valid. On the other hand the obligatory coreference makes it possible to define the appropriate antecedent of the pronoun. The list of surface referents may be ordered by assunzin~ the specific method of traversing the parsin~ tree. I expext, that as for English, recency understood as a physical distance between the pronoun and its antecedent can be the first approximation of the probability.As expected the results of the method applied here need semantic verification. But at the same time they are a reasonable data for further semantic analysis. Data arrived at in this way make this process much easier. it seems that a similar procedure can be carried out for other languages. Full grammatical information should be used wherever it can simplify such complex process as the semantic analysis.Bonnie Lynn (1978) . A Formal Approach to Discourse Anaphora. Phl) thesis, Harvard University PARTEE, Barbara Hall (1978) : Next, when looking at the area of pronoun's reference, all NPs which number--gender agree with the pronoun can be selected and this way the set of surface referents ol each pronoun can be created. It can be used as data for further semantic analysis.is one of the central concepts of any linguistic theory. In recent research into anaphora the term "reference" has been used in three different senses ( Szwedek, 1981) :(a) as a relation between the name and the thing named (Hall Partee, 1978) (b) as an association between noun phrases and mental entities in the language user's (Nash-%~ebber, 1978) (c) as an association between the occurrence of phrases in the text (Reinhart, 1981) However the reference is understood, irl order to interpret correctly anaphora on the semantic level ((a) and (b)), first a stage (C) is necessary.in this paper I have taken the point of vie~ presented under (c). i shall discuss the problem o~ onaphora in Polish ser Atences. rvly altentioF, is focused on personal ond possessive pronouns expticitely occurring in the text and moreover on zero pronouns, i.e. ellipsis of NP in the subject position, specific for Slavonic languages.purpose in the description of regularities of the reference in the Polish language. I shall express them by defining the area of pronoun's references, i.e. those regions of the text where its antecedents should De found, q hese surface referents will be selected from among NPs occurring in the sentence.The research on anaphora made for English has led to the formulation of some structural rules using such relations as command, c-command and precede-and-command (Reinhart, 3.981). I have been searching for analogous rules for Polish. 
But two essential differences have to be considered:(i)grammatical and morphological properties of Polish and English; (ii) different grammatical traditions.For English the rules concernig the coreference of entities were forrrulated on the basis of generative-transformational grammar. For Polish the first precise description of Polish syntax was formulated only recently by Szpakowicz, who based his work on the framework created by Saloni (Saloni, 1976; Saloni and Swidzinski, 1981) . It is a kind of in,mediate-constituent grammar; the grammatical categories (case, ~ender, etc) are applied not only to single words, but also to compound phrases. In my present ~vork I have limited my attention to the subset of Polish described by Szpakowicz (Szpako~Jvicz, 1983) .Folish is a highly inflexionat language and this fact has many and varied consequences. Surface referents of the pronoun will be selected from among those NPs which number--gender agree with the pronoun. Strictly speaking, the grammatical categories of the pronoun should be compatible with the categories of the NP, but in cases of neutralization they cannot be fully determined.My method of determining the areas of pronoun's reference is a syntachc one, because it is based on morphological and syntactical properties of the Polish language. I assume the availability of the surface structure of the sentence as well as grammatical and inflexional information accessible during a syntactic analysis. I detiberately do not make use of any semantic information, trying to get the most out of grammar, ri'he feature I intend t O provide is a complete definition of the area of pronoun's reference.A.Internal and external areas of referenceIn the process of determining the surface referents of the pronoun, first the area of its reference should be marked out. This area, i.e. those regions of the text, where its antecedents should be found, is usually made up of several internal reference arehsp i.e. the appropriate bits of the current sentence, and an external area, the part of the text preceding the current sentence. The list of internal areas depends on the syntactic position of the pronoun in the sentence. q'o determine these areas it is necessary to formulate sentence-level anaphora restrictions for Polish.. These rules will determine the conditions of both obligatory coreference and 0bii~atory non-coreference of entities. Thus we have two situations to consider: (i) in the case of obligatory coreference one internal area of reference containing the appropriate referent should be marked out; (ii) in the case of obligatory" non-coreference the elements which are forbidden as surface referents of the pronoun should be excluded from the internal area. The coreference of entities which is qualified on the basis of some other premises will be called admissible coreference.At our disposal we have a multileveled, hierarchic surface structure of the sentence. Generally, it seems that internal areas can be identified with the constituents on the hi~hest level: subject, objects, modifiers, regardless of their syntactic realization. Strictly speaking, noun as well as NP or any sentential structures can be instances of internal areas of reference. 
The partitioning of sentence (i) illustrates i%:(i) "(Ewa i Piotr) poszli (do niego) (z dziewczynq, kt6r~% w{a~nie spotkali)"."Eva and Peter went to him with a girl which just fret".[3.Rules ccncernin~ coreference of entities in PolishThe following rules of excluding the coreference of entities concern a level deeper than that on the surface, because they refer to syntactical functions of phrases in the sentence. The first rule presents the problem of coreference of the subject and other nominal groups, i.e. objects and nominal trodifiers, in short called objects. It concerns reflexive pronouns, so it should be noted first that they differ from those in English, eg.: -possessive pronoun "sw6j" may have one of the following meanings: his, her, its. "Suddenly, near John, saw a snake" mast 9"Nagle, obok niego, ~ zobaczy~ w@za" "Suddenly, near him, saw a snake" masc (10) "Nagle, obok siebie, zobaczy{ w~-a" "Suddenly, near himself, saw a snake" (ii) "Nagle, obok siebie, Jn masc --zobaczy~ w~za" "Suddenly, near himself, he saw a snake"In examples (10) and (13.) the reflexive pronoun has appeared. These are the only two cases in which the coreference with the subject of the main sentence is permitted and even obligator'y. Such an interpretation is correct irrespective of the position of PP in the sentence, i.e. it does not depend on whether this phrase precedes or follows the subject.The basic criterion of excluding coreference works as follows:(i)it is valid only for a simple clause, without blocking coreference between the elements of the main sentence and the constituents of embedded clauses; (ii) it is obligatory on every level of the sentence, i.e. it concerns all the sentence constructions irrespective of their position in the structure of the whole sentence.(12) to (14) illustrate this:12) "Piot"~ nie wiedzia~, czy'~ pdjdzie do kina""Peter did not know, whether would go to the movies" Jan spotka{ ch*opca, kt6ry eo dawno ni e"o d~ e c~z'ii ..... "4""John met a boy, who didn't visit him for lon~"The interpretation of reflexive pronouns is not so easy as the criterion R 1 suggests. These pronouns can be involved in various compound phrases which often are ambiguous. Especially infinitive phrases are hard to interpret. In order to do this correctly, an implicit agent which will be called further the deep subject, should be obtained. It often needs a few hypotheses to be formulated. Let us consider an example. The sentence:(15) "Jan kaza{ stuzqcemu umyd siq" can be translated in two ways which exactly (15.1) "John told (the sevant) (to wash him)" (15.2) "John told (the servant) (to wash himself )"In the infinitive phrase "umyd si@" ("to wash him" or "to wash himself") which is standing in the object position, the reflexive pronoun "si~" is coreferential with the deep subject of this phrase. Thus its interpretation has to be determined. Here we have two possibilities:(i) the previoux object-"servant"interpretation (15.1) (it) the subject of the main sentence -"John"- interpretation (15.2)One of them is the referent of the deep subject. And so we come to the next rule:(R 2) In order to interpret the infinitive phrase, the deep subject of the phrase has to be selected from among the previous object (if any) and the subject of the main sentence. Appendix:
null
null
null
null
{ "paperhash": [ "hobbs|coherence_and_coreference" ], "title": [ "Coherence and Coreference" ], "abstract": [ "Coherence in conversations and in texts can be partially characterized by a set of coherence relations, motivated ultimately by the speaker's or writer's need to be understood. In this paper, formal definitions are given for several coherence relations, based on the operations of an inference system; that is, the relations between successive portions of a discourse are characterized in terms of the inferences that can be drawn from each. In analyzing a discourse, it is frequently the case that we would recognize it as coherent, in that it would satisfy the formal definition of some coherence relation, if only we could assume certain noun phrases to be coreferential. In such cases, we will simply assume the identity of the entities referred to, in what might be called a “petty conversational implicature,” thereby solving the coherence and coreference problems simultaneously. Three examples of different kinds of reference problems are presented. In each, it is shown how the coherence of the discourse can be recognized, and how the reference problems are solved, almost as a by-product, by means of these petty conversational implicatures." ], "authors": [ { "name": [ "Jerry R. Hobbs" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null ], "s2_corpus_id": [ "45706253" ], "intents": [ [] ], "isInfluential": [ false ] }
null
497
0.012072
null
null
null
null
null
null
null
null
d553dfcd56a8b141cb979f2dfb35592cb20b18d6
23172329
null
Structure of Sentence and Inferencing in Question Answering
In the present paper we characterize in more detail some of the aspects of a question answering system using as its starting point the underlying structure of sentences (which with some approaches can be identified with the level of meaning or of logical form). First of all, the criteria are described that are used to identify the elementary units of under-
{ "name": [ "Hajicova, Eva and", "Sgall, Petr" ], "affiliation": [ null, null ] }
null
null
First Conference of the {E}uropean Chapter of the Association for Computational Linguistics
1983-09-01
5
0
null
lying structure and the operations combining them into complex units (Sect. 1), then the main types of units and operations resulting from an empirical investigation on the basis of the criteria are registered (Sect. 2), and finally the rules of inference, accounting for the relevant aspects of the relationship between linguistic and cognitive structures, are illustrated (Sect. 3). 1. A system of natural language understanding may gain an advantage from using the underlying structure of sentences (which with some approaches can be identified with the level of meaning or of logical form) as one of its starting points, instead of working with word-specific roles. Arguments for such a standpoint, which were presented in Hajicova and Sgall (1980), include the following two main points: (a) natural language is universal, i.e. its structure makes it possible to express an unlimited number of assertions, questions, etc., by finite means; once its underlying (tectogrammatical) structure is known, it is possible to use it as an output language of natural language analysis in man-machine communication and thus, without any intellectual effort on the side of the user, to ensure the functioning of automatic question answering systems (or of systems of dialogues with robots, etc.); even if many simplifications have been included in such a system, it is then known what has been simplified and it is possible to remove the simplifications whenever necessary (e.g. if the system is to be used for another set of tasks, including the analysis of a broader set of input texts, questions, etc.); (b) linguistic meaning is systematic, so that the configurations of "deep cases" (valency), tenses, modalities, number, etc. make it possible to find fully reliable information; on the other hand, such systems as those based on scenarios or scripts work in most cases with rules that are valid for the unmarked cases (in a marked case e.g. lunch in a restaurant can be taken by an employee of the restaurant, who does not reserve a table, order the meals and pay for them). To find out which of the semantic and pragmatic distinctions are reflected in the system of language (or, in other words, to find out in what respects the underlying structure of sentences differs from their surface patterns) testable operational criteria are needed; these criteria should help to distinguish: (i) whether two given surface units are strictly synonymous (i.e. share at least one of their meanings), or not; (ii) whether a single surface unit has more than one meaning (is ambiguous), or whether a single meaning is concerned, which is vague or indistinct (cf. Zwicky and Sadock, 1975; Kasher and Gabbay, 1976; Keenan, 1978); (iii) whether a given distributional restriction belongs to the tectogrammatical level, or whether it is given only by the cognitive content itself, i.e. by extralinguistic conditions; (iv) between a case of deletion (of a tectogrammatical unit by surface rules) and the absence of the given unit in the underlying structure; (v) between different kinds of tectogrammatical units (e.g. inner participants or cases, and free or adverbial modifications); (vi) which tectogrammatical unit has been deleted, in case more of them can occupy the deleted position (cf.
the tectogrammAtical difference between the elements of the topic and those of the focus of the sentence, or more exactly, between contextually bound and non-bound elements of the meaning of the sentence).As for (i), a criterion has been elaborated that works similarly as Carnap s intensional isomorphism, but is adapted for the structure of natural language, the surface gr-mmAtical means of which also exhibit synor%vmY: He expected that Mary comes and He expectedMary to come are considered synonymous, since wl---~any lexical (and morphological) cast such two sentences correspond to a single proposition (a single truth value is assigned to any possible world).On the other hand .John talked to a girl about a problem is not considered to be synonymous with John talked about a problem to a girl, since the known (Lakoff s) examples with a specific ~ uantification do not share their truth onditions; also our simple examples differ in their tectogr~mmatical structures (having different topic-focus articulations).For points (ii), (iii) and (v) the classical criteria known from European structural linguisti~ are used, such as the diagnostic contexts~ possibility of coordination, or Keenan s (1978) criterion of the necessary knowledge of the speaker whether s/he uses an ambiguous item in this or that of its meanings. It should be noted that perhaps each of the criteria has its weak points (often the implications work in one direction only, xn some cases not only surface features, but also the tectogrammatical character of the context has to be taken into account, etc.).Point (iv) can be systematically tested by means of the so-called dialogue test (cf. Haji~ov~ and Panevov~, in press): e.g. in John came the direction (rather than the~ point or the time point) has been deleted, so that the speaker necessarily knows where John came and can answer such a question (though s/he may not know from where of when John came).With respect to point (vi) the question test or the tests concerning negation can be used~ as far as the topic--focus articulation is concerned; thus e.g. in John sent a letter to his SISTER the verb as well as the Objective are ambiguous, since the sentence can (in different contexts) answer e.g. such questions as What did John do? (only John being include~'in the topic of the answer, all the rest belonging to its focus), W~a% did John send where? (also the verb belonging to the topic of the answe@ What did John do with the letters? (a letter rather than the verb being included in the topic), etc.; the criterion shows that John belongs to the topic in all readin-g~-of the sentence (since John is contained in all relevant question, if such improbable or secondary pairs are excluded as our sentence answering the questien What happened?without John referring to one of the most activ--~d elements of the stock of shared knowledge at the given time point), and that his sister belongs to the focus (not occurring in any relevant question).2. The framework resulting from an application of the criteria characterized in Sect. 
2. The framework resulting from an application of the criteria characterized in Sect. 1 can be briefly outlined as follows.

The elementary units of the underlying structure are of three kinds:
(a) lexical elements (semantic features); in the present paper we do not deal with operations or relations concerning the combining of features into more or less complex lexical meanings;
(b) elementary grammatical meanings (grammatemes), which can be classified as values belonging to various categories or parameters (delimitation, number, tense, aspect, different kinds of modalities, etc.);
(c) syntactic elements (functors) such as Actor, Addressee, Instrument, Directional, etc.

The underlying structure of a sentence can be conceived of as a network (which can be linearized, see Plátek, Sgall and Sgall, in press) the nodes and edges of which are labelled. A label of a node consists of a lexical meaning and a combination of grammatemes from different categories (the set of relevant categories is determined by the word class of the lexical meaning). A label of an edge consists in a functor, which is interpreted either as a Dependency relation, or as one of the relations of Coordination (corresponding to the meanings of and, or, but, etc.) or of Apposition. The Dependency relations are combined (in the underlying structure of a sentence without coordination or apposition) into a projective rooted tree, the nodes of which are ordered (from left to right) according to the scale of communicative dynamism, which is decisive for the topic-focus articulation of the sentence. The relations of Apposition and Coordination are combined with those of Dependency according to certain rules described in the last quoted paper and illustrated by Figs. 1 to 3.

Figure 1. A simplified underlying representation of Operational amplifier is a versatile device with applications spanning signal conditioning and special systems design; Gener is the functor of general relation (the kind of dependency often found between a noun and its modifications), the other symbols are self-explanatory; the grammatemes are written only if they are marked.

Figure 2. A simplified underlying representation of Jane either visits Mary and Tom, our family, and Mother, or she stays at home.

If interjectional sentences, vocative sentences and pseudosentences consisting only in a noun phrase are not discussed, then it can be stated that the root of every tree of the mentioned kind is labelled by a symbol the lexical part of which belongs to the word class of verbs. The kinds (and to a certain part also the order) of the dependency edges going from a node to those dependent on it are determined by the valency frame of the governing word (included in the lexical entry of the given lexical meaning). The kinds of dependency relations are specified in two respects, which are relevant for their combinatorial properties: (a) they are classed either as (inner) participants, namely Actor (i.e. Actor/Bearer, or Tesnière's premier actant rather than Fillmore's Agentive), Objective, Addressee, Origin and Effect, or as (free) modifications, i.e. Instrument, Manner, Locative, several kinds of Directional and Temporal modifications, Cause, Condition (real and irreal), etc.; (b) they are either obligatory, or optional.
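A labelled dependency tree of the kind just outlined can be represented directly as a small data structure. The sketch below (Python; the class, method and example labels are illustrative assumptions rather than the notation of the framework itself) keeps a node label as a lexical meaning plus grammatemes, an edge label as a functor such as Actor or Gener, and the dependents of each node in left-to-right order.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    """A node of the underlying structure: lexical meaning plus grammatemes."""
    lexeme: str
    grammatemes: dict = field(default_factory=dict)    # e.g. {"number": "plural"}
    functor: Optional[str] = None                      # label of the edge to the governor
    children: List["Node"] = field(default_factory=list)  # ordered by communicative dynamism

    def add(self, child: "Node", functor: str) -> "Node":
        child.functor = functor
        self.children.append(child)
        return child

def linearize(node: Node, depth: int = 0) -> None:
    """Print the projective tree, dependents indented under their governor."""
    label = f"{node.lexeme} {node.grammatemes}" if node.grammatemes else node.lexeme
    edge = f"[{node.functor}] " if node.functor else ""
    print("  " * depth + edge + label)
    for child in node.children:
        linearize(child, depth + 1)

# A rough rendering of "Operational amplifier is a versatile device ..."
root = Node("be", {"tense": "present"})
root.add(Node("amplifier"), "Actor").add(Node("operational"), "Gener")
device = root.add(Node("device"), "Objective")
device.add(Node("versatile"), "Gener")
device.add(Node("application", {"number": "plural"}), "Gener")

linearize(root)
```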
Every participant (which occurs only with some governing words, and at most once as dependent on the same token of the governing word) is included in the valency frames of all words on which it can depend; the free modifications are the same for all words belonging to the same word class (on the level of underlying structures), so that they can be listed once for all; only those modifications that are obligatory with a given lexical unit are quoted in its frame.

Two specific cases are important for the empirical investigations: (i) a dependent word present in the underlying structure but deleted in the surface should be distinguished from the absence of the given element in the underlying structure; (ii) with the inner participants it is also necessary to distinguish between the absence of an (optional) participant and a general participant of the given kind (this does not concern only the general Actor, typically expressed by one in English, but also the Objective, cf. Hajičová and Panevová, in press).
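The division of labour between valency frames (listing inner participants and obligatory modifications per lexical unit) and free modifications (listed once for the whole word class) can be made concrete with a small lookup sketch. In the following Python fragment the lexicon entries are invented examples, not data from the described system.

```python
# Free modifications are shared by all verbs, so they are listed once.
FREE_MODIFICATIONS = {"Instrument", "Manner", "Locative", "Directional",
                      "Temporal", "Cause", "Condition"}

# Valency frames list, per lexical unit, its inner participants and whether
# each is obligatory; obligatory free modifications would also be quoted here.
VALENCY_FRAMES = {
    "give":   {"Actor": "obligatory", "Objective": "obligatory", "Addressee": "obligatory"},
    "arrive": {"Actor": "obligatory", "Directional": "obligatory"},
    "read":   {"Actor": "obligatory", "Objective": "optional"},
}

def admissible(verb: str, functor: str) -> str:
    frame = VALENCY_FRAMES.get(verb, {})
    if functor in frame:
        return f"{functor} is an {frame[functor]} slot in the frame of '{verb}'"
    if functor in FREE_MODIFICATIONS:
        return f"{functor} is a free modification, available to '{verb}' as to any verb"
    return f"{functor} cannot depend on '{verb}'"

print(admissible("give", "Addressee"))
print(admissible("read", "Manner"))
print(admissible("read", "Addressee"))
```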
null
null
3. With this approach, the underlying structures are relatively close to the surface structure of sentences. This is connected with the advantages granted by the universal character of natural language (ensuring that the framework is not too narrow and can be generalized if applied to a larger class of texts, etc.). On the other hand, with such a framework it is necessary to use a model of natural language inferencing, if we want the procedure of language understanding to go beyond purely linguistic relationships. If e.g. in a question-answering system based on such a framework not only such answers should be identified that were literally present in the input text, but also those yielded by simple (mostly unconscious) inferencing normally carried out by the reader of the text, then rules of inference can be added. A first tentative set of such rules is being checked in the experiments with the system prepared on the basis of the method TIBAQ in Prague. These rules range from general ones to more or less idiosyncratic cases concerning the relationships between specific words, as well as modalities, hyponymy, etc.

A rather general rule changes e.g. a structure of the form (V-act (N-Actor) ...) into (V-act (D-Actor) (N-Instrument) ...), where V-act is a verb of action, D is a dummy (for the general actor) and N is an inanimate noun; thus The negative feedback can servo the voltage to zero is changed into One can servo the voltage to zero by ....

A rather specific rule connected with a single verb is that changing (use (S-Patient) (X-Accompaniment) ...) into (use (X-Patient) (S-Accompaniment) ...), e.g. An operational amplifier can be used with a negative feedback = With an operational amplifier a negative feedback can be used. Other similar rules concern the division of conjunct clauses, the possible omission of an adjunct under certain conditions (i.e. if not being included in the topic, e.g. from "It is possible to maintain X without employing Y" it follows that it is possible to maintain X), or several shifts of verbal modalities; e.g. a sentence having the main verb with a Possibilitive modality (can, may) is derived from a positive declarative sentence; in some cases (when the name of a device occupies the position of the Actor of the main verb) also a reverse rule is available, deriving e.g. The device X is used with a negative feedback from The device X can be used with a negative feedback. Further rules yield a conjunction or a similar connection of two statements; e.g. X is a device with the property Y and X can be applied to handle Z are combined to yield X is a device that has the property Y and can be applied to handle Z; also explicit definitions (including e.g. the verb call) are identified, and the inference rules allow for replacements of the definiendum by the definiens and vice versa in other assertions.

Besides these kinds of rules it is necessary to study (i) rules standing closer to inference as known from logic (deriving specific statements from general ones, etc.), (ii) rules of "typical" (unmarked) consequence as given e.g. by a script, and (iii) rules of "probable consequences", e.g. if John worked hard in the afternoon and he is tired in the evening, then the latter fact probably was caused by the former (if no other cause was given in the text). In our experiment in question answering we do not use these types of inference, but they will be useful for more general systems.

Another direction in which the system probably can be made more flexible concerns the absence of overt quantifiers and marking of their scopes in our underlying structures.
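The general-actor rule sketched above lends itself to a pattern-directed rewrite over tree-shaped assertions. The following Python sketch uses an invented tuple encoding and a toy test for inanimacy; it illustrates the shape of such a rule rather than the actual TIBAQ implementation.

```python
# Assertions are encoded as nested tuples: (head, {functor: subtree, ...}).
# This encoding, and the 'inanimate' test, are simplifications for the example.

INANIMATE = {"feedback", "amplifier", "voltage"}

def general_actor_rule(assertion):
    """(V-act (N-Actor) ...)  ->  (V-act (D-Actor) (N-Instrument) ...)  for inanimate N."""
    head, deps = assertion
    actor = deps.get("Actor")
    if actor is not None and actor[0] in INANIMATE and "Instrument" not in deps:
        new_deps = dict(deps)
        new_deps["Actor"] = ("one", {})        # dummy general Actor
        new_deps["Instrument"] = actor         # demote the original Actor noun
        return (head, new_deps)
    return assertion

# "The negative feedback can servo the voltage to zero."
before = ("servo", {"Actor": ("feedback", {}),
                    "Objective": ("voltage", {}),
                    "Directional": ("zero", {})})

after = general_actor_rule(before)
print(after)
# Roughly: "One can servo the voltage to zero by (means of) the feedback."
```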
One of our next aims consists in the construction of a procedure transducing the underlying structures into a mixed language, which would include means for marking quantifiers and their scopes (similarly to many formal languages of logic), while it would share all other aspects of its structure with the level of underlying representations of natural language.

Colmerauer's Q language is used for the implementation of the main procedures of the question-answering system, so that e.g. A(B,C(D,E)) represents a tree the head of which is A, which has two sister nodes, B and C, the latter being again expanded by D and E. The tree structure is used in our syntactico-semantic analysis of Czech (prepared by J. Panevová and K. Oliva) and of English (by Z. Kirschner) to represent the dependency relation between nodes. Due to the fact that Q language works only with elementary labels, the complex labels of our description have to be decomposed (i.e. the features and grammatemes of individual word forms occupy similar positions as their daughter nodes). Also the procedures for the application of inference rules and for the identification of (full, partial or indirect) answers to a question given by the user (on the basis of the corpus of input texts that have been analyzed) are programmed in Q language. The synthesis of Czech and the morphemic analysis are implemented in PL/I. For a more general system the set of inference rules should be substantially enlarged, and various heuristics, strategies and filters should be formulated in order to keep the number of derived assertions within fixed limits. For these aims the experience gained in the first experiment will be used.
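The bracketed Q-language notation A(B,C(D,E)) is easy to read into a nested structure. Below is a minimal reader written for this note (Python); it is not the Q-language implementation itself, which handles labels, variables and rewriting far more generally.

```python
def parse_q_tree(text):
    """Parse a Q-language-style tree such as 'A(B,C(D,E))' into nested lists:
    a node is [label, child, child, ...]."""
    pos = 0

    def parse_node():
        nonlocal pos
        start = pos
        while pos < len(text) and text[pos] not in "(),":
            pos += 1
        node = [text[start:pos].strip()]
        if pos < len(text) and text[pos] == "(":
            pos += 1                      # consume '('
            while True:
                node.append(parse_node())
                if text[pos] == ",":
                    pos += 1              # consume ',' and parse the next sister
                else:
                    break
            pos += 1                      # consume ')'
        return node

    return parse_node()

print(parse_q_tree("A(B,C(D,E))"))
# -> ['A', ['B'], ['C', ['D'], ['E']]]
```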
null
null
null
null
null
{ "paperhash": [ "hajicová|linguistic_meaning_and_knowledge_representation_in_automatic_understanding_of_natural_language", "kasher|on_the_semantics_and_pragmatics_of_specific_and_non-specific_indefinite_expressions" ], "title": [ "Linguistic Meaning and Knowledge Representation in Automatic Understanding of Natural Language", "ON THE SEMANTICS AND PRAGMATICS OF SPECIFIC AND NON-SPECIFIC INDEFINITE EXPRESSIONS" ], "abstract": [ "The necessity of and means for distinguishing between a level of linguistic meaning and a domain of \"factual knowledge\" (or cognitive content) are argued for, supported by a survey of relevant operational criteria. The level of meaning is characterized as a safe base for computational applications, which allows for a set of inference rules accounting for the content (factual relations) of a given domain.", "Indefinite articles are involved in delicate, deep and highly interesting philosophical and linguistic distinctions. The central problem under discussion here is that of specific (and also non-specific) indefinite expressions. The first part of the present paper (ch. 2) is devoted to a critical discussion of current linguistic and logical theories. Ch. 3 summarizes the results of this discussion. Chs. 4 and 5 include our new approach, formulated both in terms of possible-worlds-andpossible-contexts-of-utterance-semantics and of Hintikka's language games. Theoretical conclusions are drawn in the following discussions." ], "authors": [ { "name": [ "E. Hajicová", "P. Sgall" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "A. Kasher", "D. Gabbay" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null ], "s2_corpus_id": [ "5140282", "62697755" ], "intents": [ [], [] ], "isInfluential": [ false, false ] }
Problem: The paper aims to characterize the aspects of a question answering system by utilizing the underlying structure of sentences as a starting point. Solution: The hypothesis posits that a system of natural language understanding can benefit from using the underlying structure of sentences, identified with the level of meaning or logical form, instead of focusing on word-specific roles. This approach is argued to enable the expression of a wide range of assertions and questions, facilitate man-machine communication, and enhance the functioning of automatic question answering systems.
497
0
null
null
null
null
null
null
null
null
4fd3924941c68db3fcce42d77896b2e853aec002
10284738
null
L{'}idee De Grammaire Avec Le Contexte Naturel
Commonly used grammars which describe natural languages /ex. ATN, Metamorphosis Grammars/ can hardly be applied in describing highly inflectional languages. So I propose a grammar called the grammar with natural context which takes into consideration properties of highly inflectional languages /ex. Polish/ as well as structural languages /ex. English/. I introduce its normal form.
{ "name": [ "Haduch, Leszek" ], "affiliation": [ null ] }
null
null
First Conference of the {E}uropean Chapter of the Association for Computational Linguistics
1983-09-01
9
0
null
null
null
null
null
null
Main paper: Appendix:
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
497
0
null
null
null
null
null
null
null
null
11225f95613373867784b1250c80194af34f545e
6368353
null
Inquiry Semantics: A Functional Semantics of Natural Language Grammar
Programming a computer to operate to a significant degree as an author is a challenging research task. The creation of fluent multiparagraph text is a complex process because knowledge must be expressed in linguistic forms at several levels of organization, including paragraphs, sentences and words, each of which involves its own kinds of complexity. Accommodating this natural complexity is a difficult design problem. To solve it we must separate the various relevant kinds of knowledge into nearly independent collections, factoring the problem. Inquiry semantics is a new factoring of the text generation problem. It is novel in that it provides a distinct semantics for the grammar, independent of world knowledge, discourse knowledge, text plans and the lexicon, but appropriately linked to each. It has been implemented as part of the Nigel text generation grammar of English. This paper characterizes inquiry semantics, shows how it factors text generation, and describes its exemplification in Nigel. The resulting description of inquiries for English has three dimensions: the varieties of operations on information, the varieties of information operated upon, and the subject matter of the operations. The definition framework for inquiries involves both traditional and nontraditional linguistic abstractions, spanning the knowledge to be represented and the plans required for presenting it.
{ "name": [ "Mann, William C." ], "affiliation": [ null ] }
null
null
First Conference of the {E}uropean Chapter of the Association for Computational Linguistics
1983-09-01
15
12
null
Text generation is the generation of language to conform to an a priori intention and plan to communicate. The problem of text generation is naturally complex, requiring the active coordination of many kinds of knowledge having independent origins and character. (Footnote 1: previous title: Generating Text: Knowledge a Grammar Demands. This research was supported by the Air Force Office of Scientific Research contract No. F49620-79-C-0181. The views and conclusions contained in this document are those of the author and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Air Force Office of Scientific Research or the U.S. Government.) A significant part of this complexity is in grammatical knowledge. It is important for the grammar of a text generator to have its own integrity, yet without being operationally autonomous. (Footnote 2: This role of intention in the use of language is one of the reasons for calling the semantics in this paper a functional semantics. Another is our use of one of the "functional" linguistic traditions.) The methods of generating text presented here grew out of a concern to maintain the integrity and definitional independence of particular existing fragments of grammar. These methods employ the grammar in ways which do not make any strong assumptions about the nongrammatical kinds of knowledge in the text generator. They control the use of the grammar in generation. We first describe the methods, showing how they make grammatical generation possible. Then we show how they factor the problem of text generation and clarify the role of knowledge representations. Finally we characterize inquiry semantics and the notion of meaning.

People often anticipate that a text generator will plan the operations of the grammar in full detail and then execute such plans. In fact, such a mode of operation has serious difficulties, and so it is worthwhile to consider other approaches. Even given the definition of a grammar and a particular way of manipulating it to produce text, there is an issue of where the initiative should be exercised in generation. Should the responsibility for conformity of the result to the given intention and plan lie within the grammar manipulator, i.e., be part of its process of employing the grammar, or are the details of grammar use preplanned? It is an issue of control. To see the problem more clearly we can compare controlling the grammar to steering a car. If we intend to drive to a nearby store, we can imagine planning the trip (in terms of steering motions) in total detail, deciding just where to turn, change lanes, and so forth, with sufficient precision to insure success. This detailed plan could in principle then be used to steer the car to the store. Such methods of imposed control are practical only in very simple cases. Alternatively, we can make the decisions about steering at the point of need, on demand. Unanticipated conditions are thus allowed for, and the complexity of the task is reduced. (There is no need to compensate in the plan for tire pressures, for example.) At each significant point along the way, the driver chooses a direction that conforms to the goal of reaching the destination. This is an active conformity approach, in which decisions about direction are made while the trip is in progress. With imposed control, information about how to satisfy the intention and plan is needed before the process is started. With active conformity, information is needed as the process proceeds.
Given this orientation toward choice, the problem of conformity to the text plan is simply the problem of making appropriate choices. Each set of alternatives (each "system" in its systemic representation) has an associated chooser or choice expert, a process that embodies a method for choosing appropriately in any particular circumstance.

The choice experts require certain information as they proceed with text generation. Nigel's choice experts request this information by presenting inquiries to the environment (the place outside of the grammar where intentions and plans to communicate are found). For this purpose, Nigel employs a formal inquiry language in which an inquiry is an expression containing an inquiry operator and a sequence of operands. A single interface is provided for all interactions between Nigel and the environment; all interactions at the interface are in the inquiry language. This way of using such an interface is called inquiry semantics. In this framework, we can understand the demands of the grammar by understanding the inquiry operators.

This section characterizes the demands for information that Nigel can make in generating sentences. Since Nigel demands information only by presenting inquiries, we first characterize the things that Nigel can inquire about (the operands of inquiries), then characterize in two different ways the questions that Nigel can ask.

Nigel has four related information forms: concept symbols, presentation specifications, term sets, and terms. Term sets are collections of lexical items created in a special way which insures that they are appropriate, in denotation, connotation, and information content, for their intended use. (The process which creates term sets does not restrict them syntactically; that is done later by the grammar.) The individual terms in a term set need not be so restrictive that they fully express the intent of the unit being constructed, since they are used with modifiers. Term sets are not like sets of synonyms since they do not have any uniformity of semantic content. Term sets are used as collections of alternatives, from which one term will be picked for the final syntactic unit. The best example is a term set giving alternatives that can serve as the head term of a nominal group.

A Term is a single lexical item selected from a term set. It identifies the particular lexical item to appear in the generated text. Currently Nigel is deliberately underdeveloped in its treatment of lexical items, having no morphological component at all. Hence terms are simply lexical items which bear lexical features that the grammar can employ for selectivity.

To see how these forms are used, consider the sentence: The leader is John. It refers to John twice. In generating this sentence, the same concept symbol, say JLDR, would be used to generate both of the references. However, two different presentation specifications for referring to JLDR would be created. The first might specify that the resulting expression should convey the fact that the individual holds the role of leader. The second could merely specify that the resulting expression should convey the person's name. Two different term sets would also be created. Initially, each would contain conceptually and denotationally appropriate terms, possibly including "leader," "man," and "person," in one of the term sets, and "John," and "Mr. Jones" in the other.
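The division of labour between a system's chooser and the environment can be pictured with a tiny program. In the sketch below (Python), the inquiry name MultiplicityQ is taken from later in the paper, but the Environment object and the fact encoding are invented stand-ins rather than Nigel's code; the point is only that the chooser decides a grammatical feature purely by posing inquiries through a single interface.

```python
class Environment:
    """Stand-in for the knowledge base / text plan outside the grammar.
    It answers inquiries; it never sees how the grammar will use the answers."""
    def __init__(self, facts):
        self.facts = facts

    def ask(self, operator, *operands):
        return self.facts[(operator, operands)]

def number_chooser(concept, env):
    """Choice expert for a Singular/Plural system: one inquiry, one feature."""
    response = env.ask("MultiplicityQ", concept)   # 'unitary' or 'multiple'
    return "Singular" if response == "unitary" else "Plural"

env = Environment({("MultiplicityQ", ("APPOINTMENT",)): "unitary",
                   ("MultiplicityQ", ("LEADERS",)): "multiple"})

print(number_chooser("APPOINTMENT", env))   # Singular
print(number_chooser("LEADERS", env))       # Plural
```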
Under guidance from various inquiries, the grammar applies different selectivity to one term set than to the other, so that the terms "leader" and "John" are finally selected.

How do these operands of inquiries compare with conventional linguistic abstractions? Concept symbols have many precedents, and terms are familiar. Both presentation specifications and term sets are new. As we will see, both presentation specifications and term sets are widely and frequently used in the grammar. Their central role in generation suggests that they are worthy of linguistic attention.

Presentation specifications are novel in that they represent the content of particular units without its allocation to constituent units. This permits the investigation of how the allocation works, and in particular how differing ranks compete for representational roles. Competition among the possible constituents of a nominal group for representation of possession seems to be a typical case. We would like to know, for example, how the decision between using the determiner "his," the prepositional phrase "of his," and the clause "which he has" is made. A presentation specification can say in a syntactically neutral way that possession is to be expressed. Using them facilitates study of the alternation. Nigel uses subtractive operations on presentation specifications to account for the fact that repeated expression of content in a nominal group is marked, but single expression is not; e.g., it can account for the perception that "his car, which he owns" is marked in a way in which "his car, which he hates" is not.

Consider, for example, the word "attention" at the end of the third paragraph back. Other candidates for use in the same setting would include words such as "research," "curiosity," "work," "perusal," and "funds." These terms (as well as "attention") would all be in the term set for generating that nominal group. However, they are from different lexical fields, fields which are ordinarily not in alternation.

The inquiries of the grammar can be differentiated according to categories of purposes they serve. Five such categories are distinguished. In a similar way, the mappings from concepts to term sets and from term sets to terms also vary depending on the communication situation.

Recurrent topics and categories of subject matter in the inquiries reflect the syntactically encoded categories of knowledge in English. The subject matter categories form two groups: 1. elements of knowledge that typically exist prior to the intention or plan to communicate (described in section 3.3.1 below), and 2. elements of knowledge created as part of pursuing the intention or plan to communicate (described in section 3.3.2 below). These are called the Knowledge Base and the Text Plan, respectively. Surprisingly, we do not see any sharing of inquiries between these two kinds of knowledge. In Nigel, we find that each inquiry operator addresses solely one body of knowledge or the other. A few of the categories of operations address both kinds of knowledge, notably inquiries about availability of information. Within the categories, however, each individual inquiry is specialized to a single kind of knowledge.

The organization of inquiry requires that various kinds of processes be available in the environment for responding to inquiries. At a detailed level, there must be a capability for the environment to recognize each inquiry operator and to respond to each one appropriately.
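The use of term sets as collections of alternatives, narrowed differently for each reference, can be illustrated with a small filtering sketch. In the Python fragment below the lexical features and the selection policy are invented for the example and do not reflect Nigel's actual lexicon.

```python
# Two term sets created for the two references to the concept JLDR in
# "The leader is John": each holds denotationally appropriate alternatives.
term_set_1 = [
    {"term": "leader", "features": {"role-denoting", "common-noun"}},
    {"term": "man",    "features": {"common-noun"}},
    {"term": "person", "features": {"common-noun"}},
]
term_set_2 = [
    {"term": "John",      "features": {"proper-name", "first-name"}},
    {"term": "Mr. Jones", "features": {"proper-name", "surname"}},
]

def select_term(term_set, required_features):
    """Return the first term bearing every feature the grammar requires."""
    for entry in term_set:
        if required_features <= entry["features"]:
            return entry["term"]
    return None

# The presentation specification for the first reference asks to convey the role;
# the one for the second asks merely for the person's name.
print(select_term(term_set_1, {"role-denoting"}))   # leader
print(select_term(term_set_2, {"first-name"}))      # John
```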
In computational terms, for a particular domain of expressive problems, all of the inquiry operators which are called upon to serve that domain must be implemented. (For simple expressive problems this can be far fewer than the total for the grammar.) At a more comprehensive level, we can identify certain recurrent activities which must underlie the operations of the inquiry operator implementations. These include searching for an appropriate set of lexical items (such as candidate head nouns for a nominal group), creating a presentation specification for expressing a particular idea, and choosing among a set of terms which the grammar has approved as appropriate for a certain use. At an even more comprehensive level, the grammar relies on the prior activity of processes which plan the text.

The following list summarizes Nigel's activity in developing a particular nominal group: "her appointment on Wednesday morning with us." The starting point is identification of a need to refer to an object represented by concept APPOINTMENT. At the end of the activity shown, there is a structure containing the word "appointment" as the head term, the word "her" as its determiner, and elements that could be further developed into the phrases "on Wednesday morning" and "with us." The category of each inquiry operator is indicated in <brackets>. Using the answers to these inquiries, the grammar builds a structure consisting of four elements in an ordered sequence: "her," "appointment," ONWEDNESDAYMORN, WITHUS, the latter two representing conceptual elements to be further developed in subsequent applications of the grammar.

Almost all of the decomposition inquiries are paired with availability inquiries in this way. However, a few are not. For these, the grammar assumes the existence and separability of the information it requests. The following are the exception cases:

The inquiry-based semantics presented here contrasts with other accounts also called "semantics" in many ways, but it does not particularly compete with them. This semantics, as a way of theorizing, is an answer to the question "How can we characterize the circumstances under which it is appropriate to make each particular grammatical choice of a language?" It differs from other semantic approaches in several ways.

We associate meanings with grammatical features, in part because these are the controlling entities in the systemic framework. Given a systemic grammar, the syntactic structures which are produced depend entirely on the grammatical features which are chosen, and the opportunity to choose a grammatical feature also depends entirely on the grammatical features which are chosen, i.e., the entry conditions of the system in which the feature occurs. So it is convenient to associate meaning with features, and to derive meanings for any other entity by the determinate derivational methods which the systemic framework provides. To state the meaning of a grammatical feature is to state the technical circumstances under which the feature is chosen. We identify these circumstances as the set of possible collections of inquiry responses which are sufficient to lead to the choice of the feature. The definitions of the systems of the grammar and their choice experts are thus sufficient to determine the meaning of every grammatical feature.
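The inquiry dialogue behind a nominal group such as "her appointment on Wednesday morning with us" can be mimicked with a scripted environment. In the sketch below (Python) the operator names and canned responses are simplified paraphrases of the kind of dialogue described, not Nigel's actual operators; it shows the grammar driving generation purely by asking questions and assembling the answers.

```python
# A scripted environment: each inquiry about the concept APPOINTMENT has a
# canned response.  Operator names are invented paraphrases for illustration.
responses = {
    "HeadTermSetId": ["appointment", "meeting"],
    "OwnerId": "SHE",
    "TimeSpecAttachable": "ONWEDNESDAYMORN",
    "AccompanimentAttachable": "WITHUS",
}

def inquire(operator):
    print(f"  inquiry: {operator}(APPOINTMENT) -> {responses[operator]}")
    return responses[operator]

def generate_nominal_group():
    structure = []
    if inquire("OwnerId"):                            # is there a possessor?
        structure.append("her")                       # realized as a possessive determiner
    head = inquire("HeadTermSetId")[0]                # pick a term from the term set
    structure.append(head)
    structure.append(inquire("TimeSpecAttachable"))       # to be expanded later
    structure.append(inquire("AccompanimentAttachable"))  # to be expanded later
    return structure

print(generate_nominal_group())
# -> ['her', 'appointment', 'ONWEDNESDAYMORN', 'WITHUS']
```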
Ambiguity of a feature arises when there is more than one collection of relevant inquiry responses which leads to the choice of the feature. Differences of meaning reflect differences between collections of inquiry responses. In Nigel, for the features Singular and Plural, one of the collections of inquiry responses which leads to Singular contains a response "unitary" to MultiplicityQ, and a corresponding collection contains "multiple" as a response to MultiplicityQ, which leads to Plural. (Footnote 4: We do not state the method here, since that involves many systemic details, but it is normally a rather straightforward matter for the Nigel grammar. More detail can be found in [Mann 82, Mann & Matthiessen 83a, Mann & Matthiessen 83b].) (Footnote 5: The meanings of the features are not sufficient to find the sets of meanings which correspond to particular structures, since that requires the realization mapping of features to structures. However, given the associations of features with realization operations, the structures for which a particular feature (or combination of features) is chosen can be identified, and so in principle the sets of technical circumstances which can yield a particular string can be identified.) We can determine by inspection of the entire meanings that Singular and Plural exclude each other, and the determination could be made even if the features were not in direct opposition in the grammar.

Notice that this approach is compatible with approaches to grammar other than traditional systemic grammar, provided that their optionality is reexpressed as alternation of features, with choice experts defined to identify the circumstances under which each option is chosen. Notice also that it is possible to have meanings in the grammar which are ruled out by the environment, for example, by consistency conditions. A change in the environment's epistemology could lead to changes in how the grammar is employed, without changes in meaning, the grammar being more neutral than its user. Notice also that the collection of inquiry operators for a language is a claim concerning the semantic range of the grammar of that language, a characterization of what can be expressed syntactically. Notice finally that, given a grammar and an inquiry semantics of each of two different languages, the question of whether a particular sentence of one language has the same meaning as a particular sentence of the other language is an addressable question, and that it is possible in principle to find cases for which the meanings are the same. One can also investigate the extent to which a particular opposition in one language is an exact translation of an opposition in another.
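Treating the meaning of a grammatical feature as the set of inquiry-response collections that lead to its choice suggests a direct way of checking, by inspection, whether two features exclude each other. The minimal sketch below (Python) encodes only the Singular/Plural example from the text; the encoding itself is an assumption made for illustration.

```python
from itertools import product

# Meaning of a feature: the collections of inquiry responses sufficient to choose it.
MEANINGS = {
    "Singular": [{"MultiplicityQ": "unitary"}],
    "Plural":   [{"MultiplicityQ": "multiple"}],
}

def compatible(collection_a, collection_b):
    """Two response collections are compatible if they never answer
    the same inquiry differently."""
    shared = set(collection_a) & set(collection_b)
    return all(collection_a[q] == collection_b[q] for q in shared)

def mutually_exclusive(feature_a, feature_b):
    """Features exclude each other if no pair of their response collections is
    compatible - determinable even if the features are not in direct opposition."""
    return not any(compatible(a, b)
                   for a, b in product(MEANINGS[feature_a], MEANINGS[feature_b]))

print(mutually_exclusive("Singular", "Plural"))   # True
```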
null
null
1. the identity of the speaker.
2. the identity of the time of speaking, the "now" of tense.
3. given an event to express in an independent clause, the identity of the time of occurrence of the event.
4. given the need to generate a clause, the identity of the process portion (which will be realized in the main verb).

In addition, none of the mapping operators and none of the linking operators are paired. We see that the decomposition operators have little intellectual content, but the other kinds all contribute significantly.

Reviewing the inquiries, we can find several kinds of operations that are particularly difficult to support in explicit knowledge representations such as those currently used in AI or logic. One operator asks whether the existence of a particular entity is hypothetical. Knowledge gained from this inquiry is useful in controlling contrasts such as the following: If they run to town, they will be sorry. If they are running to town, they will be sorry. Another operator asks about conjectural existence. It controls contrasts such as: They will run to town. They might run to town. In the first case the running to town is treated as definite but occurring in the future. Another asks whether an action to be expressed is habitual or recurrent rather than a particular instance. Another group of inquiries seeks to determine the manner of performance of an action. Others deal with partial specifications and "question variables" of the sort that are often realized by "wh" terms such as "what," "how," and "whether." Some operators control negation and quantification, which often cause representation problems. In addition to all of these potential problem sources, associated with inquiries whose responses will be difficult to determine, there are also many difficulties which do not arise from

In this section we compare inquiry semantics to other kinds of semantics, and also identify the nature of meaning in this framework.
The inquiry language as a level of abstraction provides a useful factoring of the text generation problem, isolating the grammar-intensive part. Development of inquiry language has led to the creation of new kinds of abstract elements that can be the operands of inquiries. Of these, presentation specifications and term sets have sufficiently novel scopes to suggest that they may be useful in defining relationships between grammar and language use.

We have identified three dimensions of characterization that yield a convenient abstract structure for understanding inquiry language collectively (by categories of operands, categories of operators, and categories of subject matter). These categorizations clarify the ways in which effective use of a grammar depends on processes and information outside of the grammar, including some ways which are not well controlled in available knowledge representations.

Inquiry semantics contrasts with other theoretical entities also called "semantics" in many ways. It is potentially compatible with some other forms, but tends to be broader than many in including non-representational functions and non-declarative speech actions in its scope.
Main paper: introduction: Text generation is the generation of language to conform to an a priori intention and plan to communicate.The problem of text generation is naturally complex, requiring the 1previous title: Generating Text: Knowledge a Grammar Demands.This research was SUl~ported by the Air Force Office of Scientific Research contract No, F49620-79-C-0181. The views and conclusions contained in this document are those of the author and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of Ihe Air Force Office of Scientific Research of the U.S Government.active coordination of many kinds of knowledge having independent origins and character. A significant part of this complexity is in grammatical knowledge. It is important for the grammar of a text generator to have its own integrity, yet without being operationally autonomous. 2The methods of generating text presented here grew out of a concern to maintain the integrity and definitional independence of particular existing fragments of grammar. These methods employ the grammar in ways which do not make any strong assumptions about the nongrammatical kinds of knowledge in the text generator. They control the use of the grammar in generation.We-first describe the methods, showing how they make grammatical generation possible. Then we show how they factor the problem of text generation and clarify the role of knowledge representations. Finally we characterize inquiry semantics and the notion of meaning. grammar and control: People often anticipate that a text generator will plan the operations of the grammar in full detail and then execute such plans. In fact, such a mode of operation has serious difficulties, and so it is worthwhile to consider other approaches. Even given the definition of a grammar and a particular way of manipulating it to produce text, there is an issue of where the initiative should be exercised in generation. Should the responsibility for conformity of ',he result to the given intention and plan lie within Ihe grammar manipulator, i.e., be part of its process of employing the grammar, or are the details of grammar use preplanned? It is an issue of control.2This role of intention in the use of language is one of the reasons for calling the semantics in this paper a functional semantics Another is our uSe of one of the "functional" linguistic traditions To see the problem more clearly we can compare controlling the grammar to steering a car.If we intend to drive to a nearby store, we can imagine planning the trip (in terms of steering motions) in total detail, deciding just where to turn, change lanes, and so forth, with sufficient precision to insure success. This detailed plan could in principle then be used to steer the car to the store. Such methods of imposed control are practical only in very simple cases.Alternatively, we can make the decisions about steering at the point of need, on demand.Unanticipated conditions are thus allowed for, and the complexity of the task is reduced. (There is no need to compensate in the plan for tire pressures, for example.) At each significant point along the way, the driver chooses a direction that conforms to the goal of reaching the destination. This is an active conformity approach, in which decisions about direction are made while the trip is in progress.With imposed control, information about how to satisfy the intention and plan is needed before the process is started. 
With active conformity, information is needed as the process proceeds. Given this orientation toward choice, the problem of conformity to the text plan is simply the problem of making appropriate choices. Each set of alternatives (each "system" in its systemic representation has an associated chooser or choice expert, a process that embodies a method for choosing appropriately in any particular circumstance.The choice experts require certain information as they proceed with text generation. Nigei's choice experts request this information by presenting inquiries to the environment (the place outside of the grammar where intentions and plans to communicate are found.) For this purpose, Nigel employs a formal inquiry language in which an inquiry is an expression containing an inquiry operator and a sequence of operands. A single interface is provided for all interactions between Nigel and the environment; all interactions at the interface are in the inquiry language. This way of using such an interface is called inquiry semantics.In this framework, we can understand the demands of the grammar by understanding the inquiry operators. varieties of demands: This section characterizes the demands for information that Nigel can make in generating sentences. Since Nigel demands information only by presenting inquiries, we first " characterize the things that Nigel can inquire about (the operands of inquiries), then characterize in two different ways the questions that Nigel can ask.Nigel has four related information forms: Term sets are collections of lexical items created in a special way which insures that they are appropriate, in denotation, :cmnotation, and information content, for their intended use. (The ,~;=cess which creates term sets does not restrict them syntactically; that is done later by the grammar.) The individual terms in a term set need not be so restrictive that they fully express the intent of the unit being constructed, since they are used with modifiers. Term sets are not like sets of synonyms since they do not have any uniformity of semantic content.Term sets are used as collections of alternatives, from which one term will be picked for the final syntactic unit. The best example is a term set giving alternatives that can serve as the head term of a nominal group.A Term is a single lexical item selected from a term set. It identifies the particular lexical item to appear in the generated text.Currently Nigel is deliberately underdeveloped in its treatment of lexical items, having no morphological component at all. Hence terms are simply lexical items which bear lexical features that the grammar can employ for selectivity.To see how these forms are used, consider the sentence:The leader is John.It refers to John twice. In generating this sentence, the same concept symbol, say JLDR, would be used to generate both ;f the references. However, two different presentation specifications for referring to JLDR would be created. The first might specify that the resulting expression should convey the fact that the individual holds the role of leader. The second could merely specify that the resulting expression should convey the person's name.Two different term sets would also be created. Initially, each would contain conceptually and denotationally appropriate terms, possibly including "leader," "man," and "person," in one cf *.he term sets, and "John," and "Mr. Jones" in the other. 
Under guidance from various inquiries, the grammar applies different selectivity to one term set than to the other, so that the terms "leader" and "John" are finally selected. How do these operands of inquiries compare with conventional linguistic abstractions? Concept symbols have many precedents, and terms are familiar. Both presentation specifications and term sets are new. As we will see, both presentation specifications and term sets are widely and frequently used in the grammar. Their central role in generation suggests that they are worthy of linguistic attention. Presentation specifications are novel in that they represent the content of particular units without its allocation to constituent units. This permits the investigation of how the allocation works, and in particular how differing ranks compete for representational roles. Competition among the possible constituents of a nominal group for representation of possession seems to be a typical case. We would like to know, for example, how the decision between using the determiner "his," the prepositional phrase "of his," and the clause "which he has" is made. A presentation specification can say in a syntactically neutral way that possession is to be expressed. Using them facilitates study of the alternation. Nigel uses subtractive operations on presentation specifications to account for the fact that repeated expression of content in a nominal group is marked, but single expression is not. For example, it can account for the perception that "his car, which he owns" is marked in a way in which "his car, which he hates" is not. Consider, for example, the word "attention" at the end of the third paragraph back. Other candidates for use in the same setting would include words such as "research," "curiosity," "work," "perusal," and "funds." These terms (as well as "attention") would all be in the term set for generating that nominal group. However, they are from different lexical fields, fields which are ordinarily not in alternation. The inquiries of the grammar can be differentiated according to categories of purposes they serve. Five such categories are distinguished. In a similar way, the mappings from concepts to term sets and from term sets to terms also vary depending on the communication situation. Recurrent topics and categories of subject matter in the inquiries reflect the syntactically encoded categories of knowledge in English. The subject matter categories form two groups: 1. Elements of knowledge that typically exist prior to the intention or plan to communicate (described in section 3.3.1 below), and 2. Elements of knowledge created as part of pursuing the intention or plan to communicate (described in section 3.3.2 below). These are called the Knowledge Base and the Text Plan, respectively. Surprisingly, we do not see any sharing of inquiries between these two kinds of knowledge. In Nigel, we find that each inquiry operator addresses solely one body of knowledge or the other. A few of the categories of operations address both kinds of knowledge, notably inquiries about availability of information. Within the categories, however, each individual inquiry is specialized to a single kind of knowledge. The organization of inquiry requires that various kinds of processes be available in the environment for responding to inquiries. At a detailed level, there must be a capability for the environment to recognize each inquiry operator and to respond to each one appropriately.
In computational terms, for a particular domain of expressive problems, all of the inquiry operators which are called upon to serve that domain must be implemented. (For simple expressive problems this can be far fewer than the total for the grammar.) At a more comprehensive level, we can identify certain recurrent activities which must underlie the operations of the inquiry operator implementations. These include searching for an appropriate set of lexical items (such as candidate head nouns for a nominal group), creating a presentation specification for expressing a particular idea, and choosing among a set of terms which the grammar has approved as appropriate for a certain use. At an even more comprehensive level, the grammar relies on the prior activity of processes which plan the text. The following list summarizes Nigel's activity in developing a particular nominal group: "her appointment on Wednesday morning with us." The starting point is identification of a need to refer to an object represented by concept APPOINTMENT. At the end of the activity shown, there is a structure containing the word "appointment" as the head term, the word "her" as its determiner, and elements that could be further developed into the phrases "on Wednesday morning" and "with us." The category of each inquiry operator is indicated in <brackets>. Using the answers to these inquiries, the grammar builds a structure consisting of four elements in an ordered sequence: "her," "appointment," ONWEDNESDAYMORN, WITHUS, the latter two representing conceptual elements to be further developed in subsequent applications of the grammar. relations between operators: Almost all of the decomposition inquiries are paired with availability inquiries in this way. However, a few are not. For these, the grammar assumes the existence and separability of the information it requests. The following are the exception cases: 1. the identity of the speaker. 2. the identity of the time of speaking, the "now" of tense. 3. given an event to express in an independent clause, the identity of the time of occurrence of the event. 4. given the need to generate a clause, the identity of the process portion (which will be realized in the main verb). In addition, none of the mapping operators and none of the linking operators are paired. We see that the decomposition operators have little intellectual content, but the other kinds all contribute significantly. Reviewing the inquiries, we can find several kinds of operations that are particularly difficult to support in explicit knowledge representations such as those currently used in AI or logic. One operator asks whether the existence of a particular entity is hypothetical. Knowledge gained from this inquiry is useful in controlling contrasts such as the following: "If they run to town, they will be sorry." "If they are running to town, they will be sorry." Another operator asks about conjectural existence. It controls contrasts such as: "They will run to town." "They might run to town." In the first case the running to town is treated as definite but occurring in the future. Another asks whether an action to be expressed is habitual or recurrent rather than a particular instance. Another group of inquiries seeks to determine the manner of performance of an action. Others deal with partial specifications and "question variables" of the sort that are often realized by "wh" terms such as "what," "how," and "whether."
Some operators control negation and quantification, which often cause representation problems. In addition to all of these potential problem sources, associated with inquiries whose responses will be difficult to determine, there are also many difficulties which do not arise from ... In this section we compare inquiry semantics to other kinds of semantics, and also identify the nature of meaning in this framework. comparative semantics: The inquiry-based semantics presented here contrasts with other accounts also called "semantics" in many ways, but it does not particularly compete with them. This semantics, as a way of theorizing, is an answer to the question "How can we characterize the circumstances under which it is appropriate to make each particular grammatical choice of a language?" It differs from other semantic approaches in that ... We associate meanings with grammatical features, in part because these are the controlling entities in the systemic framework. Given a systemic grammar, the syntactic structures which are produced depend entirely on the grammatical features which are chosen, and the opportunity to choose a grammatical feature also depends entirely on the grammatical features which are chosen, i.e., the entry conditions of the system in which the feature occurs. So it is convenient to associate meaning with features, and to derive meanings for any other entity by the determinate derivational methods which the systemic framework provides. To state the meaning of a grammatical feature is to state the technical circumstances under which the feature is chosen. We identify these circumstances as the set of possible collections of inquiry responses which are sufficient to lead to the choice of the feature. The definitions of the systems of the grammar and their choice experts are thus sufficient to determine the meaning of every grammatical feature. [4][5] Ambiguity of a feature arises when there is more than one collection of relevant inquiry responses which leads to the choice of the feature. Differences of meaning reflect differences between collections of inquiry responses. In Nigel, for the features Singular and Plural, one of the collections of inquiry responses which leads to Singular contains a response "unitary" to MultiplicityQ, and a corresponding collection contains "multiple" as a response to MultiplicityQ, which leads to Plural. [Footnote 4: We do not state the method here, since that involves many systemic details, but it is normally a rather straightforward matter for the Nigel grammar. More detail can be found in [Mann 82, Mann & Matthiessen 83a, Mann & Matthiessen 83b].] [Footnote 5: The meanings of the features are not sufficient to find the sets of meanings which correspond to particular structures, since that requires the realization mapping of features to structures. However, given the associations of features with realization operations, the structures for which a particular feature (or combination of features) is chosen can be identified, and so in principle the sets of technical circumstances which can yield a particular string can be identified.]
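The identification of a feature's meaning with the set of inquiry-response collections that lead to its choice can be made concrete with a toy enumeration. In the sketch below only MultiplicityQ and its responses "unitary" and "multiple" come from the text; the second operator, its responses, and the chooser itself are hypothetical, and a real systemic grammar would involve many interacting systems rather than one flat function.

```python
from itertools import product

# Possible responses per inquiry operator (HypotheticalQ and its values are invented).
RESPONSE_RANGES = {
    "MultiplicityQ": ["unitary", "multiple"],
    "HypotheticalQ": ["hypothetical", "actual"],
}

def chooser(responses: dict) -> str:
    # Toy choice expert: the number feature depends on MultiplicityQ alone.
    return "Plural" if responses["MultiplicityQ"] == "multiple" else "Singular"

def meaning_of(feature: str) -> list:
    """All collections of inquiry responses sufficient to lead to choosing `feature`."""
    keys = list(RESPONSE_RANGES)
    collections = []
    for values in product(*(RESPONSE_RANGES[k] for k in keys)):
        responses = dict(zip(keys, values))
        if chooser(responses) == feature:
            collections.append(responses)
    return collections

if __name__ == "__main__":
    singular, plural = meaning_of("Singular"), meaning_of("Plural")
    # Singular and Plural exclude each other: no response collection leads to both.
    assert not [c for c in singular if c in plural]
    print(len(singular), "collections lead to Singular;", len(plural), "lead to Plural")
```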
We can determine by inspection of the entire meanings that Singular and Plural exclude each other, and the determination could be made even if the features were not in direct opposition in the grammar. Notice that this approach is compatible with approaches to grammar other than traditional systemic grammar, provided that their optionality is reexpressed as alternation of features, with choice experts defined to identify the circumstances under which each option is chosen. Notice also that it is possible to have meanings in the grammar which are ruled out by the environment, for example, by consistency conditions. A change in the environment's epistemology could lead to changes in how the grammar is employed, without changes in meaning, the grammar being more neutral than its user. Notice also that the collection of inquiry operators for a language is a claim concerning the semantic range of the grammar of that language, a characterization of what can be expressed syntactically. Notice finally that, given a grammar and an inquiry semantics of each of two different languages, the question of whether a particular sentence of one language has the same meaning as a particular sentence of the other language is an addressable question, and that it is possible in principle to find cases for which the meanings are the same. One can also investigate the extent to which a particular opposition in one language is an exact translation of an opposition in another. conclusions: The inquiry language as a level of abstraction provides a useful factoring of the text generation problem, isolating the grammar-intensive part. Development of the inquiry language has led to the creation of new kinds of abstract elements that can be the operands of inquiries. Of these, presentation specifications and term sets have sufficiently novel scopes to suggest that they may be useful in defining relationships between grammar and language use. We have identified three dimensions of characterization that yield a convenient abstract structure for understanding the inquiry language collectively (by categories of operands, categories of operators, and categories of subject matter). These categorizations clarify the ways in which effective use of a grammar depends on processes and information outside of the grammar, including some ways which are not well controlled in available knowledge representations. Inquiry semantics contrasts with other theoretical entities also called "semantics" in many ways. It is potentially compatible with some other forms, but tends to be broader than many in including non-representational functions and non-declarative speech actions in its scope. Appendix:
null
null
null
null
{ "paperhash": [ "mann|an_overview_of_the_penman_text_generation_system", "mann|nigel:_a_systemic_grammar_for_text_generation.", "mann|the_anatomy_of_a_systemic_choice", "hudson|arguments_for_a_non-transformational_grammar" ], "title": [ "An Overview of the Penman Text Generation System", "Nigel: A Systemic Grammar for Text Generation.", "The Anatomy of a Systemic Choice", "Arguments for a Non-Transformational Grammar" ], "abstract": [ "The problem of programming computers to produce natural language explanations and other texts on demand is an active research area in artificial intelligence. In the past, research systems designed for this purpose have been limited by the weakness of their linguistic bases, especially their grammars, and their techniques often cannot be transferred to new knowledge domains. \n \nA new text generation system, Penman, is designed to overcome these problems and produce fluent multiparagraph text in English in response to a goal presented to the system. Penman consists of four major modules: a knowledge acquisition module which can perform domain-specific searches for knowledge relevant to a given communication goal; a text planning module which can organize the relevant information, decide what portion to present, and decide how to lead the reader's attention and knowledge through the content; a sentence generation module based on a large systemic grammar of English; and an evaluation and plan-perturbation module which revises text plans based on evaluation of text produced. \n \nDevelopment of Penman has included implementation of the largest systemic grammar of English in a single notation. A new semantic notation has been added to the systemic framework, and the semantics of nearly the entire grammar has been defined. The semantics is designed to be independent of the system's knowledge notation, so that it is usable with widely differing knowledge representations, including both frame-based and predicate-calculus-based approaches.", "Abstract : Programming a computer to write text which meets a prior need is a challenging research task. As part of such research, Nigel, a large programmed grammar of English, has been created in the framework of systemic linguistics begun by Halliday. In addition to specifying function and structures of English, Nigel has a novel semantic stratum which specifies the situations in which each grammatical feature should be used. The report consists of three papers on Nigel: an introductory overview, the script of a demonstration of its use in generation, and an exposition of how Nigel relates to the systemic framework. Although the effort to develop Nigel is significant both as computer science research and as linguistic inquiry the outlook of the report is oriented to its linguistic significance.", "This paper presents a framework for expressing how choices are made in systemic grammars. Formalizing the description of choice processes enriches descriptions of the syntax and semantics of languages, and it contributes to constructive models of language use. There are applications in education and computation. The framework represents the grammar as a combination of systemic syntactic description and explicit choice processes, called “choice experts.” Choice experts communicate across the boundary of the grammar to its environment, exploring an external intention to communicate. The environment's answers lead to choices and thereby to creation of sentences and other units, tending to satisfy the intention to communicate. 
The experts’ communicative framework includes an extension to the systemic notion of a function, in the direction of a more explicit semantics. Choice expert processes are presented in two notations, one informal and the other formal. The informal notation yields a grammar‐guided conver...", "For the past decade, the dominant transformational theory of syntax has produced the most interesting insights into syntactic properties. Over the same period another theory, systemic grammar, has been developed very quietly as an alternative to the transformational model. In this work Richard A. Hudson outlines \"daughter-dependency theory,\" which is derived from systemic grammar, and offers empirical reasons for preferring it to any version of transformational grammar. The goal of daughter-dependency theory is the same as that of Chomskyan transformational grammar to generate syntactic structures for all (and only) syntactically well-formed sentences that would relate to both the phonological and the semantic structures of the sentences. However, unlike transformational grammars, those based on daughter-dependency theory generate a single syntactic structure for each sentence. This structure incorporates all the kinds of information that are spread, in a transformational grammar, over to a series of structures (deep, surface, and intermediate). Instead of the combination of phrase-structure rules and transformations found in transformational grammars, daughter-dependency grammars contain rules with the following functions: classification, dependency-marking, or ordering. Hudson's strong arguments for a non-transformational grammar stress the capacity of daughter-dependency theory to reflect the facts of language structure and to capture generalizations that transformational models miss. An important attraction of Hudson's theory is that the syntax is more concrete, with no abstract underlying elements. In the appendixes, the author outlines a partial grammar for English and a small lexicon and distinguishes his theory from standard dependency theory. Hudson's provocative thesis is supported by his thorough knowledge of transformational grammar.\"" ], "authors": [ { "name": [ "W. Mann" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "W. Mann", "C. Matthiessen" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "W. Mann" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "R. Hudson" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null, null ], "s2_corpus_id": [ "14136713", "57089912", "9972666", "62158973" ], "intents": [ [], [], [ "background" ], [] ], "isInfluential": [ false, false, false, false ] }
Problem: The paper addresses the challenging research task of programming a computer to operate as an author, specifically focusing on the creation of fluent multiparagraph text which involves expressing knowledge in linguistic forms at various levels of organization. Solution: The paper proposes a new approach called inquiry semantics to factor the text generation problem, providing a distinct semantics for grammar independent of other knowledge types like world knowledge, discourse knowledge, text plans, and the lexicon. This approach aims to separate different kinds of knowledge into nearly independent collections to address the complexity of text generation.
497
0.024145
null
null
null
null
null
null
null
null
6a6b9a99a8b30d48e7b34ac705e86735ec6802f5
7297360
null
{WEDNESDAY}: Parsing Flexible Word Order Languages
A parser for "flexible" word order languages must be substantially data driven. In our view syntax has two distinct roles in this connection: (i) to give impulses for assembling cognitive representations, (ii) to structure the space of search for fillers. WEDNESDAY is an interpreter for a language describing the lexicon and operating on natural language sentences. The system operates from left to right, interpreting the various words comprising the sentence one at a time. The basic ideas of the approach are the following:
{ "name": [ "Stock, Oliviero and", "Castelfranchi, Cristiano and", "Parisi, Domenico" ], "affiliation": [ null, null, null ] }
null
null
First Conference of the {E}uropean Chapter of the Association for Computational Linguistics
1983-09-01
9
1
null
A parser for "flexible" word order languages must be substantially data driven. In our view syntax has two distinct roles in this connection: (i) to give impulses for assembling cognitive representations, (ii) to structure the space of search for fillers.WEDNESDAY is an interpreter for a language describing the lexicon and operating on natural language sentences. The system operates from left to right, interpreting the various words comprising the sentence one at a time. The basic ideas of the approach are the following: a) to introduce into the lexicon linguistic knowledge that in other systems is in a centralized module. The lexicon therefore carries not only morphological data and semantic descriptions.Also syntactic knowledge is distributed throughout it, partly of a procedural kind.to build progressively a cognitive representation of the sentence in the form of a semantic network, in a global space, accessible from all levels of the analysis. c) to introduce procedures invoked by the words themselves for syntactic memory management. Simply stated, these procedures decide on the opening, closing, and mantaining of search spaces; they use detailed constraints and take into account the active expectations.WEDNESDAY is implemented in MAGMA-LISP and with a stress on the non-deterministic mechanism.Parsing typologically diverse languages emphasizes aspects that are absent or of little importance in English. By taking these problems into account, some light may be shed on: a) insufficiently treated psycholinguistic aspects b) a design which is less language-dependent c) extra-and non-grammatical aspects to be taken into consideration in designing a friendly English The work reported here has largely involved problems with parsing Italian. One of the typical features of Italian is a lower degree of word order rigidity in sentences. For instance, "Paolo ama Maria" (Paolo loves Maria) may be rewritten without any significant difference in meaning (leaving aside questions of context and pragmatics) in any the six possible permutations: Paolo ama Maria, Paolo Maria ama, Maria ama Paolo, Maria Paolo ama, ama Paolo Maria, ama Maria Paolo. Although Subject-Verb-Object is a statistically prevalent construction, all variations in word order can occur inside a component, and they may depend on the particular words which are used.In ATNSYS (Cappelli, Ferrari, Moretti, Prodanof and Stock, 1978) , a previously constructed ATN based system (Woods, 1970) , a special dynamic reordering mechanism was introduced in order to get sooner to a correct syntactic analysis, when parsing sentences of a coherent text (Ferrari and Stock, 1980 Small, 1980) is an interesting attempt to assign an active role to the lexicon. The basic aspect of parsing, according to Small's approach, is disambiguation. Words may have large numbers of different meanings. Discrimination nets inserted in words indicate the paths to be followed in the search for the appropriate meaning. Words are defined as coroutines. 
The control passes from one word, whose execution is temporarily suspended, to another one and so on, with reentering in a suspended word if an event occurs that can help proceeding in the suspended word's discrimination net. This approach too takes little account of syntactic constraints, and therefore implies serious problems while analyzing complex, multiple clause sentences. It is interesting to note that, though our approach was strictly parsing oriented from the outset, there are in it many similarities with concepts developed independently in the Lexical-Functional Grammar linguistic theory (Kaplan & Bresnan, 1982). A parser for flexible word order languages must be substantially data driven. In our view syntax has two distinct roles in this connection: to give impulses for assembling cognitive representations (basically impulses to search for fillers for gaps or substitutions to be performed in the representations) - to structure the space of search for fillers. WEDNESDAY, the system presented here, is an interpreter for a language describing the lexicon and operating on natural language sentences. The system operates from left to right, interpreting the various words comprising the sentence one at a time. The diagram for WEDNESDAY is shown in Fig. 1. The basic ideas of the approach are the following: Also syntactic knowledge is distributed throughout it, partly of a procedural kind. In other words, though the system assigns a fundamental role to syntax, it does not have a separate component called "grammar". By being for a large part bound to words, syntactic knowledge makes it possible to specify the expectations that words bring along, and in what context which conditions will have to be met by candidates to satisfy them. "Impulses", as they are called in WEDNESDAY to indicate their active role, result in connecting nodes in the sentence cognitive memory. They may admit various alternative specifications, including also side-effects such as equi-np recognition, signalling a particular required word order, etc. To manage structured spaces in this way - to maintain a syntactic control in the analysis of complex sentences - to keep an emphasis on the role played by the lexicon. The following memories are used by WEDNESDAY (a rough data-structure sketch follows below): 1) a SENTENCE COGNITIVE MEMORY in which semantic material carried by the words is continuously added and assembled. This memory can be accessed at any stage of the parsing. 2) a STRUCTURED SYNTACTIC MEMORY in which, at every computational level: - the expectations defining the syntactic space are activated (e.g.
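The field names and types in this sketch are our own guesses, not the MAGMA-LISP representation actually used by WEDNESDAY; the intent is only to make the roles of the two memories concrete.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Set, Tuple

# Sentence cognitive memory: a global semantic network that grows as words are interpreted.
@dataclass
class CognitiveMemory:
    nodes: Set[str] = field(default_factory=set)
    relations: List[Tuple[str, ...]] = field(default_factory=list)  # e.g. ("P-ADVISE", "X1", "E2", "X3")

    def add(self, relation: Tuple[str, ...]) -> None:
        self.nodes.update(relation[1:])
        self.relations.append(relation)

# One search space of the structured syntactic memory, at one computational level.
@dataclass
class Space:
    expectations: List[str] = field(default_factory=list)   # e.g. "verb with a certain tense"
    gaps: Dict[str, dict] = field(default_factory=dict)     # gap node -> constraints on fillers
    headlist: List[str] = field(default_factory=list)       # candidate filler nodes seen so far

@dataclass
class SyntacticMemory:
    spaces: List[Space] = field(default_factory=list)

    def open_space(self) -> Space:
        space = Space()
        self.spaces.append(space)
        return space

    def close_space(self) -> Space:
        return self.spaces.pop()
```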
null
null
null
is an impulse to merge an explicitly indicated node with another node that must satisfy certain constraints, under certain conditions. MERGE is therefore the basic network assembling resource. We use to characterize the node quoted in a MERGE impulse as a "gap" node, a node that actually is merged with a gap node as a "filler" node. c) the indication of the values of the features that must not be in contrast with the corresponding features of the filler (i.e. an unspecified value of the feature in the filler is ok, a different value from the one specified is bad). If the value of the feature in the filler is NIL, the value specified here will be assumed. d) a markvalue that must not be contrasted by the markvalue of the filler e) sideffects caused by the merging of the nodes. These can be: SETFLAG, which raises a specified flag (that can subsequently alter the result of a test), REMFLAG, which removes a flag, and SUBSUBJ, which specifies the instantiation node and the ordinal number of the relative argument identifying a node. The subject of the subordinate clause (whose MAIN node will be actually filling the gap resulting from the present MERGE) will be implicitly merged into the node specified in SUBSUBJ. It should be noted that the latter may also be a gap node, in which case also after the present operation it will maintain that characteristic.MARK is an impulse to stick a markvalue onto a node. If the chosen node has already a markvalue, the new one will be forced in and will replace it.MUST indicates that the current space will not be closed if the gap is not filled. Not all gaps have a MUST: in fact in the resulting network there is an indication of which nodes remain gaps.As mentioned before, the merging of two nodes is generally an act under non-deterministic control: a non-deterministic point is established and the first attempt consists in making the proposed merging. Another attempt will consist in simply not performing that merging. A FIRST specification results in not establishing a nondeterministic point and simply merging the gap with the first acceptable filler.By and large the internal structure of gaps may be explained as follows.A gap has some information bound to it. More information is bound to subgaps, which are LISP atoms generated by interpreting the specification of alternatives within a MERGE impulse. When an "interesting event" occurs those subgaps are awakened which "find the event promising".if one of the subgaps actually finds that a node can be merged with its "father" gap and that action is performed, the state of the memories is changed in the following way:in the SENTENCE COGNITIVE MEMORY the merging results in substitution of the node and of inverse pointers.-in the STRUCTURED SYNTACTIC MEMORY the gap entity is eliminated, together with the whole set of its subgaps.Furthermore if the filler was found in a headlist, it will be removed from there.that while the action in the SENTENCE COGNITIVE MEMORY is performed immediately, the action in the STRUCTURED SYNTACTIC MEMORY may occur later.One further significant aspect is that with the arrival of the MAIN all nodes present in headlists must be merged. 
If this does not happen, the present attempt will abort. WEDNESDAY is implemented in MAGMA-LISP with a stress on the non-deterministic mechanism. Another version will be developed on a Lisp Machine. WEDNESDAY can analyze fairly complex, ambiguous sentences, yielding the alternative interpretations. As an example consider the following Zen-like sentence, which has a number of different interpretations in Italian: "Il saggio orientale dice allo studente di parlare tacendo". WEDNESDAY gives all (and only) the correct interpretations, two of which are displayed in Fig. 3a and Fig. 3b (in English words, more or less: "the eastern treatise advises the student to talk without words" and "the oriental wise man silently informs the student that he (the wise man) is talking"). [Fig. 3: semantic networks for the two interpretations, built from predications such as P-TREATISE, P-EASTERN, P-ADVISE, P-STUDENT, P-TALK, P-GER, and P-BE-SILENT applied to individual and event nodes X0000076, X0000175, and E0000178.]
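The feature and markvalue tests that govern a MERGE can be sketched as follows. This is an illustrative Python rendering under our own assumptions about how gaps and fillers are represented; it is not the actual WEDNESDAY code, and it ignores the non-deterministic control points, side effects, and headlist bookkeeping described above.

```python
from typing import Optional

def compatible(gap_constraints: dict, filler_features: dict) -> bool:
    """Feature test described for MERGE: an unspecified (NIL) filler value is acceptable,
    a value different from the one required by the gap is not."""
    for feature, required in gap_constraints.items():
        value = filler_features.get(feature)
        if value is not None and value != required:
            return False
    return True

def merge(gap: dict, filler: dict) -> Optional[dict]:
    """Try to merge a filler node into a gap node.

    Returns the merged node, with the gap's required values assumed where the filler
    left a feature unspecified, or None if features or markvalues are in contrast."""
    if not compatible(gap["constraints"], filler["features"]):
        return None
    if gap.get("markvalue") and filler.get("markvalue") and gap["markvalue"] != filler["markvalue"]:
        return None
    merged = {k: v for k, v in filler["features"].items() if v is not None}
    for feature, required in gap["constraints"].items():
        merged.setdefault(feature, required)
    return {"features": merged, "markvalue": filler.get("markvalue") or gap.get("markvalue")}

if __name__ == "__main__":
    gap = {"constraints": {"number": "sing", "case": "acc"}, "markvalue": None}
    filler = {"features": {"number": "sing", "case": None}, "markvalue": None}
    print(merge(gap, filler))  # the unspecified case is assumed from the gap, as the text describes
```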
Main paper: impulses can be of two types. a merge: is an impulse to merge an explicitly indicated node with another node that must satisfy certain constraints, under certain conditions. MERGE is therefore the basic network assembling resource. We use to characterize the node quoted in a MERGE impulse as a "gap" node, a node that actually is merged with a gap node as a "filler" node. c) the indication of the values of the features that must not be in contrast with the corresponding features of the filler (i.e. an unspecified value of the feature in the filler is ok, a different value from the one specified is bad). If the value of the feature in the filler is NIL, the value specified here will be assumed. d) a markvalue that must not be contrasted by the markvalue of the filler e) sideffects caused by the merging of the nodes. These can be: SETFLAG, which raises a specified flag (that can subsequently alter the result of a test), REMFLAG, which removes a flag, and SUBSUBJ, which specifies the instantiation node and the ordinal number of the relative argument identifying a node. The subject of the subordinate clause (whose MAIN node will be actually filling the gap resulting from the present MERGE) will be implicitly merged into the node specified in SUBSUBJ. It should be noted that the latter may also be a gap node, in which case also after the present operation it will maintain that characteristic.MARK is an impulse to stick a markvalue onto a node. If the chosen node has already a markvalue, the new one will be forced in and will replace it.MUST indicates that the current space will not be closed if the gap is not filled. Not all gaps have a MUST: in fact in the resulting network there is an indication of which nodes remain gaps.As mentioned before, the merging of two nodes is generally an act under non-deterministic control: a non-deterministic point is established and the first attempt consists in making the proposed merging. Another attempt will consist in simply not performing that merging. A FIRST specification results in not establishing a nondeterministic point and simply merging the gap with the first acceptable filler.By and large the internal structure of gaps may be explained as follows.A gap has some information bound to it. More information is bound to subgaps, which are LISP atoms generated by interpreting the specification of alternatives within a MERGE impulse. When an "interesting event" occurs those subgaps are awakened which "find the event promising".if one of the subgaps actually finds that a node can be merged with its "father" gap and that action is performed, the state of the memories is changed in the following way:in the SENTENCE COGNITIVE MEMORY the merging results in substitution of the node and of inverse pointers.-in the STRUCTURED SYNTACTIC MEMORY the gap entity is eliminated, together with the whole set of its subgaps.Furthermore if the filler was found in a headlist, it will be removed from there.that while the action in the SENTENCE COGNITIVE MEMORY is performed immediately, the action in the STRUCTURED SYNTACTIC MEMORY may occur later.One further significant aspect is that with the arrival of the MAIN all nodes present in headlists must be merged. 
If this does not happen the present attempt will abort.WEDNESDAY is implemented in MAGMA-LISP and with a stress on the non-deterministic mechanism.Another version will be developed on a Lisp Machine.WEDNESDAY can analyze fairly complex, ambiguous sentences yielding the alternative interpretations.As an example consider the following Zen-like sentence, that has a number of different interpretations in Italian: Ii saggio orientale dice allo studente di parlare taeendo WEDNESDAY gives all (and only) the correct interpretations, two of which are displayed in Fig.3a and Fig.3b (in English words, more or less: "the eastern treatise advices the student to talk without words" and "the oriental wisemen silently informs the student that he (the wiseman) is talking").P-BE-SILENT X00OO175 C0000180: P-GER EOOOO178 C0000183 E0000178: P-TALK X0OOO175 COOOO174: P-STUDENT XOOOO175 COO00165: P-ADVISE XOO00076 EOOOO178 XOOOO175 C0000119: P-EASTERN XOOOOO76 COOO0075: P-TREATISE XOOOO076 : A parser for "flexible" word order languages must be substantially data driven. In our view syntax has two distinct roles in this connection: (i) to give impulses for assembling cognitive representations, (ii) to structure the space of search for fillers.WEDNESDAY is an interpreter for a language describing the lexicon and operating on natural language sentences. The system operates from left to right, interpreting the various words comprising the sentence one at a time. The basic ideas of the approach are the following: a) to introduce into the lexicon linguistic knowledge that in other systems is in a centralized module. The lexicon therefore carries not only morphological data and semantic descriptions.Also syntactic knowledge is distributed throughout it, partly of a procedural kind.to build progressively a cognitive representation of the sentence in the form of a semantic network, in a global space, accessible from all levels of the analysis. c) to introduce procedures invoked by the words themselves for syntactic memory management. Simply stated, these procedures decide on the opening, closing, and mantaining of search spaces; they use detailed constraints and take into account the active expectations.WEDNESDAY is implemented in MAGMA-LISP and with a stress on the non-deterministic mechanism.Parsing typologically diverse languages emphasizes aspects that are absent or of little importance in English. By taking these problems into account, some light may be shed on: a) insufficiently treated psycholinguistic aspects b) a design which is less language-dependent c) extra-and non-grammatical aspects to be taken into consideration in designing a friendly English The work reported here has largely involved problems with parsing Italian. One of the typical features of Italian is a lower degree of word order rigidity in sentences. For instance, "Paolo ama Maria" (Paolo loves Maria) may be rewritten without any significant difference in meaning (leaving aside questions of context and pragmatics) in any the six possible permutations: Paolo ama Maria, Paolo Maria ama, Maria ama Paolo, Maria Paolo ama, ama Paolo Maria, ama Maria Paolo. 
Although Subject-Verb-Object is a statistically prevalent construction, all variations in word order can occur inside a component, and they may depend on the particular words which are used.In ATNSYS (Cappelli, Ferrari, Moretti, Prodanof and Stock, 1978) , a previously constructed ATN based system (Woods, 1970) , a special dynamic reordering mechanism was introduced in order to get sooner to a correct syntactic analysis, when parsing sentences of a coherent text (Ferrari and Stock, 1980 Small, 1980) is an interesting attempt to assign an active role to the lexicon. The basic aspect of parsing, according to Small's approach, is disambiguation. Words may have large numbers of different meanings. Discrimination nets inserted in words indicate the paths to be followed in the search for the appropriate meaning. Words are defined as coroutines. The control passes from one word, whose execution is temporarily suspended, to another one and so on, with reentering in a suspended word if an event occurs that can help proceeding in the suspended word's discrimination net.This approach too takes into little account syntactic constraints, and therefore implies serious problems while analyzing complex, multiple clause sentences.It is interesting tc note that, though our approach was strictly parsing oriented from the outset, there are in it many similarities with concepts developed independently in the Lexical-Functional Grammar linguistic theory (Kaplan & Bresnan, 1982) .A parser for flexible word order languages must be substantially data driven. In our view syntax has two distinct roles in this connection to give impulses for assembling cognitive representations (basically impulses to search for fillers for gaps or substitutions to be performed in the representations) -to structure the space of search of fillers. WEDNESDAY, the system presented here, is an interpreter for a language describing the lexicon and operating on natural language sentences. The system operates from left to right, interpreting the various words comprising the sentence one at a time.The diagram for WEDNESDAY is shown in Fig. 1 .The basic ideas of the approach are the following: Also syntactic knowledge is distributed throughout it, partly of a procedural kind. In other words, though the system assigns a fundamental role to syntax, it does not have a separate component called "grammar". By being for a large part bound to words, syntactic knowledge makes it possible to specify the expectations that words bring along, and in what context which conditions will have to be met by candidates to satisfy them. "Impulses", as they are called in WEDNESDAY to indicate their active role, result in connecting nodes in the sentence cognitive memory. They may admit various alternative specifications, including also side-effects such as equi-np recognition, signalling a particular required word order, etc. To manage structured spaces in this way -to maintain a syntactic control in the analysis of complex sentence to keep an emphasis on the role played by the lexicon. The following memories are used by WEDNESDAY: I) a SENTENCE COGNITIVE MEMORY in which semantic material carried by the words is continuously added and assembled. This memory can be accessed at any stage of the parsing.COGNITIVE I ~...dO FIY 1 L .......2) a STRUCTURED SYNTACTIC MEMORY in which, at every computational level:-the expectations defining the syntactic space are activated (e.g. 
the expectation of a verb with a certain tense for a space S) the expectations of fillers to be merged with the gap nodes are activated -the nodes capable of playing the role of fillers are memorized there are various local and contextual indications. Appendix:
null
null
null
null
{ "paperhash": [ "kwasny|relaxation_techniques_for_parsing_grammatically_ill-formed_input_in_natural_language_understanding_systems", "ferrari|strategy_selection_for_an_atn_syntactic_parser", "hayes|flexible_parsing", "weischedel|responding_intelligently_to_unparsable_inputs", "riesbeck|comprehension_by_computer_:_expectation-based_analysis_of_sentences_in_context" ], "title": [ "Relaxation Techniques for Parsing Grammatically Ill-Formed Input in Natural Language Understanding Systems", "Strategy Selection for an ATN Syntactic Parser", "Flexible Parsing", "Responding Intelligently to Unparsable Inputs", "Comprehension by computer : expectation-based analysis of sentences in context" ], "abstract": [ "This paper investigates several language phenomena either considered deviant by linguistic standards or insufficiently addressed by existing approaches. These include co-occurrence violations, some forms of ellipsis and extraneous forms, and conjunction. Relaxation techniques for their treatment in Natural Language Understanding Systems are discussed. These techniques, developed within the Augmented Transition Network (ATN) model, are shown to be adequate to handle many of these cases.", "I. It is impossible to measure the merits of a grammar, seen as the component of an analyser, in absolute terms. An \"ad hoc\" grammar, constructed for a limited set of sentences is, w i t h o u t d o u b t , more efficient in dealing with those particular sentences than a zrammer constructed for a larger set. Therefore, t h e first rudimentary criterion, when evaluating the relation~hlp between a grammar and a set of sentences, should be to establish whether this grammar is c a p a b l e of analysing these sentences. This is the determination of linguistic coverage, and necessitates the definition of the linguistic phenomena, independently of the linguistic theory which has been adopted to recognise these phenomena.", "When people use natural language in natural settings, they often use it ungrammatically, missing out or repeating words, breaking-off and restarting, speaking in fragments, etc., Their human listeners are usually able to cope with these deviations with little difficulty. If a computer system wishes to accept natural language input from its users on a routine basis, it must display a similar indifference. In this paper, we outline a set of parsing flexibilities that such a system should provide. We go on to describe FlexP. a bottom-up pattern-matching parser that we have designed and implemented to provide these flexibilities for restricted natural language input to a limited-domain computer system.", "All natural language systems are likely to receive inputs for which they are unprepared. The system must be able to respond to such inputs by explicitly indicating the reasons the input could not be understood, so that the user will have precise information for trying to rephrase the input. If natural language communication to data bases, to expert consultant systems, or to any other practical system is to be accepted by other than computer personnel, this is an absolute necessity.This paper presents several ideas for dealing with parts of this broad problem. One is the use of presupposition to detect user assumptions. The second is relaxation of tests while parsing. The third is a general technique for responding intelligently when no parse can be found. All of these ideas have been implemented and tested in one of two natural language systems. 
Some of the ideas are heuristics that might be employed by humans; others are engineering solutions for the problem of practical natural language systems.", "Abstract : ELI (English Language Interpreter) is a natural language parsing program currently used by several story understanding systems. ELI differs from most other parsers in that it: produces meaning representations (using Schank's Conceptual Dependency system) rather than syntactic structures; uses syntactic information only when the meaning can not be obtained directly; talks to other programs that make high level inferences that tie individual events into coherent episodes; uses context-based exceptions (conceptual and syntactic) to control its parsing routines. Examples of texts that ELI has understood, and details of how it works are given." ], "authors": [ { "name": [ "S. Kwasny", "N. Sondheimer" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "G. Ferrari", "O. Stock" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "P. Hayes", "G. Mouradian" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "R. Weischedel", "J. Black" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "C. Riesbeck", "R. Schank" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null, null, null ], "s2_corpus_id": [ "181820", "28125035", "11007680", "18828496", "60546035" ], "intents": [ [], [ "methodology", "background" ], [], [], [ "background" ] ], "isInfluential": [ false, false, false, false, false ] }
- Problem: Parsing typologically diverse languages, such as Italian with flexible word order, presents challenges in syntactic analysis due to the lower degree of word order rigidity. - Solution: A parser for flexible word order languages must be substantially data-driven, utilizing syntax to assemble cognitive representations and structure the space of search for fillers in a non-deterministic mechanism, as exemplified by the WEDNESDAY system.
497
0.002012
null
null
null
null
null
null
null
null
85e314b75794d9b95e8880edce81d472de1823ac
10821594
null
Abstract Control Structures and the Semantics of Quantifiers
Intuitively, a quantifier is any word or phrase that expresses a meaning that answers one of the questions "How many?" or "How much?" Typical English examples include all, no, many, few, some but not many, all but at most a very few, wherever, whoever, whoever there is, and also, it can be argued, only (Keenan, 1971), also (Cushing, 1978b), and the (Chomsky, 1977). In this paper we review an empirically motivated analysis of such meanings (Cushing, 1976; 1982a) and draw out its computational significance. For purposes of illustration, we focus our attention on the meanings expressed by the English words whatever and some, commonly represented, respectively, by the symbols "∀" and "∃", but most of what we say will generalize to the other meanings of this class.
{ "name": [ "Cushing, Steven" ], "affiliation": [ null ] }
null
null
First Conference of the {E}uropean Chapter of the Association for Computational Linguistics
1983-09-01
30
3
null
In Section 1, we review the notion of satisfaction in a model, through which logical formulas are customarily imbued implicitly with meaning. In Section 2, we discuss quantifier relativization, a notion that becomes important for meanings other than ∀ and ∃. In Section 3, we use these two notions to characterize quantifier meanings as structured functions of a certain sort. In Section 4, we discuss the computational significance of that analysis. In Section 5, we elaborate on this significance by outlining a notion of abstract control structure that the analysis instantiates. Given a semantic representation language L containing predicate constants and individual constants and variables, an interpretation I of L is a triple <D, R, {f}>, where D is a set of individuals, the domain of I; R is a function, the interpretation function of I, that assigns members of D to individual constants in L and sets of lists of members of D to predicates in L, the length of a list being equal to the number of arguments in the predicate to which it corresponds; and {f} is a set of functions, the assignment functions f of I, that assign members of D to variables in L. A model M for L is a pair <D, R>, an interpretation of L without its assignment functions. Since "a factual situation comprises a set of individuals bearing certain relations to each other," such "a situation can be represented by a relational structure <D, R1, ..., Ri, ...>, where D is the set of individuals in question and R1, ..., Ri, ... certain relations on D," (van Fraassen, 1971, 107), i.e., in this context, sets of lists of members of D. Models thus serve intuitively to relate formulas in L to the factual situations they are intended to describe by mapping their constants into D and <R1, ..., Ri, ...>. The "variable" character of the symbols assigned values by an f relative to those interpreted by R is reflected in the fact that a set of fs corresponds to a fixed <D, R> to comprise an interpretation. The distinction between R and f gives us two different levels on which the satisfaction of formulas can be defined, i.e., on which formulas in L can be said to be true or false under I. First, we define satisfaction relative to an assignment of values to variables, by formulating statements like (i)-(vi) of Figure 1, where "M ⊨ (A) [f]" is read as f satisfies A in M or M satisfies A given f. Given these statements, we can define "A ⊃ B", read if A then B, as "¬(A & ¬B)", and we can define "(∃x)", read for some x or there are x, as "¬(∀x)¬". Second, we can define satisfaction by a model, by saying that M satisfies A, written "M ⊨ (A)", if M ⊨ (A) [f] for whatever assignment functions f there are for M. Intuitively, this can be read as saying that A is true of the factual situation that is represented by the relational structure into which L is interpreted, regardless of what values are given to variables by the assignment functions of an interpretation. For some discussion of the cognitive or psychological significance of these notions, see Miller (1979a,b) and Cushing (1983).
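The two levels of satisfaction can be sketched over a small finite model. In the fragment below the domain, the predicate extensions, and the tuple encoding of formulas are invented for the example; only the distinction between satisfaction relative to an assignment and satisfaction by a model (truth under every assignment) follows the text.

```python
from itertools import product

# A model is a domain D plus an interpretation R of predicates as sets of argument tuples.
D = {"paolo", "maria", "news"}
R = {"Interesting": {("news",), ("maria",)},
     "Linguistic": {("news",)}}

def satisfies(formula, f):
    """Satisfaction relative to an assignment f of individuals to variables.
    Formulas are encoded as ("pred", name, *vars); only atomic formulas are shown."""
    tag, name, *variables = formula
    assert tag == "pred"
    return tuple(f[v] for v in variables) in R[name]

def model_satisfies(formula, variables=("x",)):
    """Satisfaction by the model: true under every assignment of domain members to the variables."""
    return all(satisfies(formula, dict(zip(variables, values)))
               for values in product(D, repeat=len(variables)))

if __name__ == "__main__":
    print(satisfies(("pred", "Interesting", "x"), {"x": "news"}))  # True for this assignment
    print(model_satisfies(("pred", "Interesting", "x")))           # False: not every individual qualifies
```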
, 1971, 108) represent the meanings expressed by sentences like (5), for which x and A are as for (2) and B = (6): EQUATION (3) EQUATION Whatever there is is interesting. Interesting(x) (4) (∀x)(B;A) (5) Whatever is linguistic is interesting. (= Whatever there is that is linguistic is interesting.) (6) Linguistic(x) In general, B and A in (4) are lists of formulas in L, the relativization formulas and the principal formulas, respectively, of (4); both lists for (5) are of length 1, and we will assume lists of that length for the rest of our discussion. Given (v) and (vi), the relativized quantification (4) is logically equivalent to the simple quantification (7), reflecting the synonymy of (5) with (8), for example, but this fact does not generalize to quantifier meanings other than ∀, because there are quantifiers Q for which there is no truth-functional connective c for which (9) is logically equivalent to (10): (7) (∀x)(B ⊃ A) (8) Whatever there is, if it is linguistic, then it is interesting. (9) (Qx)(B;A) (10) (Qx)(B c A) For a formal proof of this important fact, see Cushing (1976; 1982a). The relativized case must thus be considered separately from the simple one, despite its apparent superfluity in the case of ∀, which suffices for our purposes (with ∃) in all other respects. QUANTIFIER MEANINGS AS STRUCTURED FUNCTIONS Statement (vi) characterizes the meaning expressed by (4) implicitly, by stating the conditions under which (4) can be said to be either true or false; in general, other "truth values" are also required for natural language (Cushing, 1982a), but we will not discuss those cases here. Given (vi), we can characterize the meaning expressed by (4) explicitly as a function, (11), that generates a truth value u from M, f, x, B, and A: (11) u = ∀(M, f, x, B, A) If we let σ* be the function that maps a predicate in L to its extension relative to M, f, and x -- i.e., the subset of D whose members make that predicate satisfied by M given f when assigned individually as values to x --, then we can replace the English clause on the right-hand side of the "iff" in (vi) with the equivalent set-theoretic formulation (12), and thus (vi) itself with the equivalent statement (13): (12) D ∩ σ*(M,f,x,B) ⊆ σ*(M,f,x,A) (13) M ⊨ (∀x)(B;A) [f] iff D ∩ σ*(M,f,x,B) ⊆ σ*(M,f,x,A) In other words, (4) is true if and only if the intersection of D with the extension of B is wholly contained as a subset in the extension of A. D is omitted from the right-hand side of the "⊆" in (12) for more general reasons that need not concern us here. Letting αi, i = 0, 1, 2, be set variables, we can abstract away from the sets in (12) to get the relation -- i.e., in this context, boolean-valued function -- (14), which can be factored into more basic component set-theoretic relations as shown in (15), in which the superscripts and subscripts indicate which argument places a relation is to be applied to, when the steps in the derivation are reversed: (14) α0 ∩ α1 ⊆ α2 (15) ⊆(α0 ∩ α1, α2) = (⊆₁², ∩₂¹)(α0, α1, α2) = ... Finally, dropping the arguments αi from the last line of (15), we get the quantificational relation, ρ∀, expressed by ∀, as shown in (16): (16) ρ∀ = (⊆₁², ∩₂¹) The function (11), the meaning expressed by (4), thus consists of instances of two other functions: σ*, which generates sets from models, assignments, and predicates; and ρ∀, which generates truth values from sets; all related as in Figure 2.
Strictly speaking, the left-most instance of σ* is really a different function -- viz., the three-input function σ*( , , , true), rather than the four-input function σ*( , , , ) --, since true is a constant that must occur there, but this technicality need not worry us here. Each function in Figure 2 provides the same mapping as is provided collectively by the lower-level functions to which it is connected. "Select sets", for example, is a mnemonic dummy-name for the function that consists of the three indicated instances of σ*, through which these three independent instances interface with ρ∀. The effect of ∀, in turn, is achieved by applying ρ∀ to whatever three sets are provided to it by Select-sets. Like Select-sets, ρ∀ can also be further decomposed into subfunctions, as shown in Figure 3, which reflects the structure of (15). The important point here is not the tree notation per se, but the fact that a functional hierarchy is involved, of the indicated sort. Any other notation that is capable of expressing the relevant relationships would be just as -- in certain respects, more (Cushing, 1982a, Figures 10 and 11) -- adequate for our purpose. For some general discussion of meanings as structured functions, see Cushing (1979a). The two immediate subfunctions of ∀ differ in one key respect, namely, in that Select-sets has nothing to do specifically with ∀, but would be required in the analysis of any quantifier meaning; everything that is peculiar to ∀ is encoded entirely in ρ∀. An analysis of ∃, for example, can be obtained by simply replacing ρ∀ in Figure 2 with an appropriate ρ∃, viz., the one in (17), in which Comp is a function that takes the complement of a set -- i.e., those members of D that are not in the set --, and Pair is a function that duplicates its input: (17) ρ∃ = (≠₁², Comp₁¹, ∩₂¹, Pair₁¹) This relation unravels to exactly the correct truth condition and satisfaction statement for relativized ∃, just as (16) does for ∀. In the general case, we also have to include a third subfunction, n_Q, which generates a numerical parameter, as indicated in Figure 4. Select-sets -- more precisely, its σ* subfunctions -- explicates the binding property common to all quantifier meanings, because it characterizes the extensions of predicates (via σ*) by removing the relevant variable from the purview of the assignment, as can be seen clearly in statement (vi) of Figure 1. The function ρ_Q, the quantificational relation expressed by Q, explicates the predication property of quantifier meanings, by virtue (primarily) of which different quantifier meanings are distinguished. Its quantificational relation is what a quantifier predicates; the extensions of the predicates it is applied to are what it predicates that of. The intuition that quantifiers are in some sense predicational is thus explained, even though the notion that they are "higher predicates" in a syntactic sense has long since failed the test of empirical verification. The function n_Q is what underlies the irreducibility property of certain quantifier meanings, by virtue of which (9) is not logically equivalent to (10). Like ρ_Q, n_Q is specifically characteristic of Q. For present purposes, we can consider it to be null in the case of ∀ and ∃.
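For finite models, the decomposition into Select-sets and a quantificational relation can be mirrored directly in code. In the sketch below the example domain and extensions are invented, and ρ∃ is written as the non-emptiness condition it unravels to rather than through the Comp/Pair factoring of (17); it is an illustration of the structured-function idea, not the author's formal apparatus.

```python
# Extensions of one-place predicates relative to a finite domain (playing the role of σ*).
D = {"paolo", "maria", "news", "grammar"}
EXT = {"Linguistic": {"news", "grammar"},
       "Interesting": {"news", "grammar", "maria"}}

def select_sets(B, A):
    """The Select-sets subfunction: supply the domain and the two predicate extensions."""
    return D, EXT[B], EXT[A]

def rho_forall(d, b, a):
    """Quantificational relation for relativized 'whatever': D ∩ ext(B) ⊆ ext(A)."""
    return (d & b) <= a

def rho_exists(d, b, a):
    """Quantificational relation for relativized 'some': D ∩ ext(B) ∩ ext(A) is non-empty."""
    return bool(d & b & a)

def quantify(rho, B, A):
    """Overall structured function: apply a quantificational relation to the sets
    produced by Select-sets (cf. the hierarchy around Figure 2 above)."""
    return rho(*select_sets(B, A))

if __name__ == "__main__":
    print(quantify(rho_forall, "Linguistic", "Interesting"))  # whatever is linguistic is interesting -> True
    print(quantify(rho_exists, "Interesting", "Linguistic"))  # something interesting is linguistic -> True
```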
The relationship of these functions to the quantifier meanings they decompose is indicated schematically in Figure 5. It must be stressed in the strongest possible terms that the motivation for the analysis embodied in Figure 4 has absolutely nothing at all to do with computational considerations of any sort. Computational relevance need not imply linguistic or cognitive relevance, any more than mathematical relevance does, and vice versa. See Cushing (1979b) and Berwick and Weinberg (1982) for relevant argumentation. On the contrary, the analysis is motivated by a wide range of linguistic and psychological considerations that is too extensive to review here. See Cushing (1982a) for the full argument. The analysis does have computational significance, however, which follows post facto from its form and consists in the fact that functional hierarchies of exactly the sort it exemplifies can be seen to make up the computational systems that are expressed by computer programs.

If we take a program like the one in Figure 6, for example, and ask what functions -- i.e., mathematical mappings with no side effects -- it involves, we can answer immediately with the list in (18):

(18) (i)   y = x + 2
     (ii)  z' = (y + x)^2
     (iii) z = z'^2
     (iv)  z' = (y - x)^2
     (v)   z = -z'^2
     (vi)  w = z - 1

There is a function that gets a value for y by adding 2 to the value of x, a function that gets a value for z' by squaring the sum of the values of x and y, and so on. Closer examination reveals, however, that there is an even larger number of other functions that must be recognized as being involved in Figure 6. First, there is the function in (19), which does appear explicitly in Figure 6, but without an explicit output variable:

(19) s = sin(y)

Second, there is the boolean-valued function in (20), which also appears in Figure 6, but with no indication as to its functional character:

(20) b = <(s, .5)

More significantly, there is a set of functions that are entirely implicit in Figure 6. Continuing in this way, we can extract two further functions: F2, which consists of the composition of (18vi) and F3; and F0, which consists of the composition of F2, F1, and (18i) and defines the overall function effected by the program, as shown in Figure 7.

The variables in Figure 6 are strictly numerical only for the sake of illustration. As we have just seen, even in this case, extracting the implicit functional hierarchy expressed by the program requires the introduction of a non-numerical -- viz., boolean-valued -- variable. In general, variables in a program can be taken to range over any data type at all -- i.e., any kind of object to be processed --, as long as it can be provided with an appropriate implementation, and the same is therefore true, as well, of its implicit functional hierarchy. For an extensive list of references on abstract data types, see Kapur (1980); for some discussion of their complementary relationship with the functional hierarchies expressed by programs, see Cushing (1978a; 1980). The hierarchy expressed by an assembly language program, for example, might well involve variables that range over registers, locations, and the like, and bottom-node functions that store and retrieve data, and so on, just as Figure 4 has bottom-node functions that assign extensions to predicates and form the intersections of sets.
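Figure 6 itself is not reproduced above, but a program containing exactly the functions listed in (18)-(20) can be sketched as follows. The conditional wiring (which test selects which arithmetic branch, and how the implicit functions F1-F3 group the pieces) is an assumption made for illustration, since only the component functions themselves are recoverable from the text.

import math

def program(x):
    y = x + 2                       # (18i)  explicit function
    s = math.sin(y)                 # (19)   appears without an explicit output variable
    b = s < 0.5                     # (20)   boolean-valued, its functional character unmarked
    if b:                           # F1 (implicit): selects between the two arithmetic branches
        z_prime = (y + x) ** 2      # (18ii)
        z = z_prime ** 2            # (18iii)
    else:
        z_prime = (y - x) ** 2      # (18iv)
        z = -(z_prime ** 2)         # (18v)
    w = z - 1                       # (18vi); F2 composes this with F3, and F0, the whole
    return w                        # program, composes F2, F1 and (18i)

print(program(1.0))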
Given implementations of these latter functions, Figure 4 defines a computational system, just as much as Figure 7 does, and so can be naturally implemented in whatever programming language those implementations are themselves formulated in. The control structure indicators -- the words IF, THEN, ELSE, the semi-colons, the sequential placement on the page, and so on -- in Figure 6 are ad hoc syntactic devices that really express semantic relationships of functional hierarchy, viz., those shown in Figure 7. In general, we can identify a control structure with such a functional hierarchy. For some background discussion relevant to this notion, see Hamilton and Zeldin (1976). A control structure can be said to be legitimate if its interfaces are correct, i.e., if the subfunctions do effect the same mappings as the functions they purportedly decompose. Of the three structures in Figure 8, for example, only (ii) is legitimate, because (i) and (iii) each generates a value of a as a side effect -- i.e., a is generated by a subfunction, but not by the overall function -- and b in (i) appears from nowhere -- i.e., as an input to a subfunction, but not as an input to the overall function, or as an output from another subfunction on the same level. In general, the variables in these structures can be interpreted as really representing lists of variables, just as "B" and "A" in (4) can be interpreted as representing lists of predicates. Of these three legitimate structures, then, only (ii) can be seen as occurring in Figure 7. Figure 4 also contains a different structure (for Select-sets) that combines the features of (25) and (26).

The important point here is that functional hierarchies comprising legitimate control structures are inherent in the systems expressed by workable programs. As such, they have proven useful both as a verification tool and as a programming tool. For some discussion of the relationship that ought to exist, ideally, between these two different modes of application, see Hamilton and Zeldin (1979). Through interaction with those who have written an existing program, one can derive the abstract control structure of the system expressed by the program, make that structure legitimate, and then make the corresponding changes in the original program. In this way, subtle but substantial errors can be exposed and corrected that might not be readily revealed by more conventional debugging techniques. Conversely, given a legitimate control structure -- such as the one for quantifier meanings in Figure 4, for example -- the system it comprises can be implemented in any convenient programming language -- essentially, by reversing the process through which we derived Figure 7 from Figure 6, adapted to the relevant language. For some discussion of software that automates this process, see Cushing (1982b) and Wasserman and Gutz (1982). For a good description of the vision that motivates the development of this software -- i.e., the ideal situation toward which its development is directed -- see Hamilton and Zeldin (1983). Our present concerns are primarily theoretical and thus do not require the ultimate perfection of this or any other software. A number of interesting variants have been proposed to make this notion of control structure applicable to a wider class of programs. See Martin (1982), for example, for an attempt to integrate it with more traditional data base notions.
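The legitimacy requirement (that the subfunctions of a control-structure node collectively effect exactly the mapping claimed for the node, with no values appearing from nowhere or escaping as side effects) can be made concrete with a small check of the following kind. This is a hypothetical sketch, not the HOS-style tooling cited above.

def same_mapping(parent, composed, test_inputs):
    # A node is legitimate, in the sense used above, only if composing its
    # subfunctions reproduces the parent's mapping (here, on sampled inputs).
    return all(parent(x) == composed(x) for x in test_inputs)

parent = lambda x: (x + 2) ** 2
f = lambda x: x + 2                 # subfunction
g = lambda y: y ** 2                # subfunction
composed = lambda x: g(f(x))        # a legitimate composition structure

print(same_mapping(parent, composed, range(-5, 6)))    # True

# An illegitimate variant: a value is produced by a subfunction only, as a side
# effect, like "a" in structures (i) and (iii) of Figure 8.
hidden = []
def g_leaky(y):
    hidden.append(y)                # output that the overall function never declares
    return y ** 2

print(same_mapping(parent, lambda x: g_leaky(f(x)), range(-5, 6)))  # still True on values,
# which is exactly why interface correctness concerns all inputs and outputs of a
# node, not just the value finally returned.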
Harel (1979) introduces non-determinacy, and Prade and Vaina (1980) attempt to incorporate concepts from the theory of fuzzy sets and systems. Further development of the latter of these efforts would be of particular interest in our present context, in view of work done by Zadeh (1977), for example, to explicate quantifier and other meanings in terms of fuzzy logic.
null
null
null
null
Main paper: In Section 1, we review the notion of satisfaction in a model, through which logical formulas are customarily imbued implicitly with meaning. In Section 2, we discuss quantifier relativization, a notion that becomes important for meanings other than ∀ and ∃. In Section 3, we use these two notions to characterize quantifier meanings as structured functions of a certain sort. In Section 4, we discuss the computational significance of that analysis. In Section 5, we elaborate on this significance by outlining a notion of abstract control structure that the analysis instantiates.

Given a semantic representation language L containing predicate constants and individual constants and variables, an interpretation I of L is a triple <D, R, {f}>, where D is a set of individuals, the domain of I; R is a function, the interpretation function of I, that assigns members of D to individual constants in L and sets of lists of members of D to predicates in L, the length of a list being equal to the number of arguments in the predicate to which it corresponds; and {f} is a set of functions, the assignment functions of I, that assign members of D to variables in L. A model M for L is a pair <D, R>, an interpretation of L without its assignment functions. Since "a factual situation comprises a set of individuals bearing certain relations to each other," such "a situation can be represented by a relational structure <D, R_1, ..., R_i, ...>, where D is the set of individuals in question and R_1, ..., R_i, ... certain relations on D" (van Fraassen, 1971, 107), i.e., in this context, sets of lists of members of D. Models thus serve intuitively to relate formulas in L to the factual situations they are intended to describe by mapping their constants into D and <R_1, ..., R_i, ...>. The "variable" character of the symbols assigned values by an f relative to those interpreted by R is reflected in the fact that a set of fs corresponds to a fixed <D, R> to comprise an interpretation.

The distinction between R and f gives us two different levels on which the satisfaction of formulas can be defined, i.e., on which formulas in L can be said to be true or false under I. First, we define satisfaction relative to an assignment of values to variables, by formulating statements like (i)-(vi) of Figure 1, where "M ⊨ (A) [f]" is read as f satisfies A in M or M satisfies A given f. Given these statements, we can define "A ⊃ B", read if A then B, as "~(A & ~B)", and we can define "(∃ x)", read for some x or there are x, as "~(∀ x)~". Second, we can define satisfaction by a model, by saying that M satisfies A, written "M ⊨ (A)", if M ⊨ (A) [f] for whatever assignment functions f there are for M. Intuitively, this can be read as saying that A is true of the factual situation that is represented by the relational structure into which L is interpreted, regardless of what values are given to variables by the assignment functions of an interpretation. For some discussion of the cognitive or psychological significance of these notions, see Miller (1979a,b) and Cushing (1983).
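The two levels of satisfaction just distinguished can be made concrete with a small Python sketch. The domain, the relation, and the formula are invented for illustration, and the clause structure only loosely mirrors statements (i)-(vi) of Figure 1.

from itertools import product

# Model M = <D, R>: a domain and an interpretation of the predicate constants.
D = {1, 2, 3}
R = {"LessThan": {(1, 2), (1, 3), (2, 3)}}

def satisfies(formula, f):
    # M |= (formula) [f]: satisfaction relative to an assignment f of members of D to variables.
    op = formula[0]
    if op == "pred":                      # ("pred", "LessThan", "x", "y")
        _, name, *vars_ = formula
        return tuple(f[v] for v in vars_) in R[name]
    if op == "not":
        return not satisfies(formula[1], f)
    if op == "and":
        return satisfies(formula[1], f) and satisfies(formula[2], f)
    if op == "forall":                    # ("forall", "x", subformula)
        _, var, sub = formula
        return all(satisfies(sub, {**f, var: d}) for d in D)
    raise ValueError(op)

def model_satisfies(formula, variables=("x", "y")):
    # M |= (formula): satisfied under whatever assignment functions there are for M.
    return all(satisfies(formula, dict(zip(variables, values)))
               for values in product(D, repeat=len(variables)))

phi = ("forall", "x", ("not", ("pred", "LessThan", "x", "x")))
print(model_satisfies(phi))               # True: nothing is less than itself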
null
null
null
null
{ "paperhash": [ "wasserman|the_future_of_programming", "kapur|towards_a_theory_for_abstract_data_types", "prade|what_fuzzy_hos_may_mean", "zadeh|pruf_-_a_language_for_the_representation_of_meaning_in_natural_languages", "harel|and/or_programs:_a_new_approach_to_structured_programming", "hamilton|higher_order_software_-_a_methodology_for_defining_software", "fraassen|formal_semantics_and_logic" ], "title": [ "The future of programming", "TOWARDS A THEORY FOR ABSTRACT DATA TYPES", "What Fuzzy HOS May Mean", "PRUF - A Language for the Representation of Meaning in Natural Languages", "And/Or Programs: A New Approach to Structured Programming", "Higher Order Software - A Methodology for Defining Software", "Formal semantics and logic" ], "abstract": [ "The nature of programming is changing. These changes will accelerate as improved software development practices and more sophisticated development tools and environments are produced. This paper surveys the most likely changes in the programming task and in the nature of software over the short term, the medium term, and the long term.\nIn the short term, the focus is on gains in programmer productivity through improved tools and integrated development environments. In the medium term, programmers will be able to take advantage of libraries of software components and to make use of packages that generate programs automatically for certain kinds of common systems. Over the longer term, the nature of programming will change even more significantly as programmers become able to describe desired functions in a nonprocedural way, perhaps through a set of rules or formal specification languages. As these changes occur, the job of the application programmer will become increasingly analysis-oriented and software developers will be able to attack a large number of application areas which could not previously be addressed effectively.", "A rigorous framework for studying immutable data types having nondeterministic operations and operations exhibiting exceptional behavior is developed. The framework embodies the view of a data type taken in programming languages, and supports hierarchical and modular structure among data types. The central notion in this framework is the definition of a data type. An algebraic and behavioral approach for defining a data type is developed which focuses on the input-output behavior of a data type as observed through its operations. The definition of a data type abstracts from the representation structure of its values as well as from the multiple representations of the values for any representation structure. A hierarchical specification language for data types is proposed. The semantics of a specification is a set of related data types whose operations have the behavior captured by the specification. A clear distinction is made between a data type and its specification(s). The normal behavior and the exceptional behavior of the operations are specified separately. The specification language provides mechanisms to specify (i) a precondition for an operation thus stating its intended inputs, (ii) the exceptions which must be signalled by the operations, and (iii) the exceptions which the operations can optionally signal. Two properties of a specification, consistency and behavioral completeness, are defined. A consistent specification is guaranteed to specify at least one data type. A behaviorally complete specification ''completely'' specifies the observable behavior of the operations on their intended inputs. 
A deductive system based on first order multi-sorted predicate calculus with identity is developed for abstract data types. It embodies the general properties of data types, which are not explicitly stated in a specification. The theory of a data type, which consists of a subset of the first order properties of the data type, is constructed from its specification. The theory is used in verifying programs and designs expressed using the data type. Two properties of a specification, well definedness and completeness, are defined based on what can be proved from it using different fragments of the deductive system. The sufficient completeness property of Guttag and Horning is also formalized and related to the behavioral completeness property. The well definedness property is stronger than the consistency property, because the well definedness property not only requires that the specification specifies at least one data type, but also captures the intuition that it preserves other specifications used in it thus ensuring modular structure among specifications. The completeness property is stronger than the sufficient completeness property, since in addition to the requirement that the behavior of the observers can be deduced on any intended input by equational reasoning, it also requires that the equivalence of the observable effect of the construc", "Abstract : The intended objective of this paper is the investigation of the possible fuzzy extensions of H.O.S. methodology. After of brief recall of this methodology and a detailed presentation of the fuzzy concepts which are needed, the notion of fuzzy data type is introduced and discussed, along with its consequences for control maps. The general question of (fuzzy) reliability is then dealt with. (Author)", "PRUF--an acronym for Possibilistic Relational Universal Fuzzy--is a designation for a novel type of synthetic language which is intended to serve as a target language for the representation of meaning of expressions in a natural language.", "A simple tree-like programming/specification language is presented. The central idea is the dividing of conventional programming constructs into the two classes of and and or subgoaling, the subgoal tree itself constituting the program. Programs written in the language can, in general, be both nondeterministic and parallel. The syntax and semantics of the language are defined, a method for verifying programs written in it is described, and the practical significance of programming in the language assessed. Finally, some directions for further research are indicated.", "The key to software reliability is to design, develop, and manage software with a formalized methodology which can be used by computer scientists and applications engineers to describe and communicate interfaces between systems. These interfaces include: software to software; software to other systems; software to management; as well as discipline to discipline within the complete software development process. The formal methodology of Higher Order Software (HOS), specifically aimed toward large-scale multiprogrammed/multiprocessor systems, is dedicated to systems reliability. With six axioms as the basis, a given system and all of its interfaces is defined as if it were one complete and consistent computable system. 
Some of the derived theorems provide for: reconfiguration of real-time multiprogrammed processes, communication between functions, and prevention of data and timing conflicts.", "ion Ajdukiewicz, Kazimierz Aleph null Algorithm Beth duplication Euclidean reversing Alphabetic variant Alphabetical order Anderson, Alan R. Aristotle Assignment Assignment function Axiom of Choice Banks, P. Becker, Oskar Belnap, Jr., Nuel D. Beth, Evert W. Bivalence Birkhoff, Garrett Bochenski, Innocentius M. Boolean algebra Bound variable Branch" ], "authors": [ { "name": [ "A. Wasserman", "S. Gutz" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "D. Kapur" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "H. Prade", "L. Vaina" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "L. Zadeh" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "D. Harel" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "M. Hamilton", "S. Zeldin" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Van Fraassen", "C. Bastiaan" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null, null, null, null, null ], "s2_corpus_id": [ "6424188", "117003291", "58262426", "9182901", "966526", "7799553", "118786816" ], "intents": [ [ "methodology" ], [], [ "background" ], [], [], [ "background" ], [] ], "isInfluential": [ false, false, false, false, false, false, false ] }
null
497
0.006036
null
null
null
null
null
null
null
null
2ec92e957b4c2cfc1397299713a6bc0bcd802996
11543084
null
A Flexible Natural Language Parser Based on a Two-Level Representation of Syntax
In this paper we present a parser which allows one to make explicit the interconnections between syntax and semantics, to analyze sentences in a quasi-deterministic fashion and, in many cases, to identify the roles of the various constituents even if the sentence is ill-formed. The main feature of the approach on which the parser is based consists in a two-level representation of the syntactic knowledge: a first set of rules emits hypotheses about the constituents of the sentence and their functional role, and another set of rules verifies whether a hypothesis satisfies the constraints about the well-formedness of sentences. However, the application of the second set of rules is delayed until the semantic knowledge confirms the acceptability of the hypothesis. If the semantics reject it, a new hypothesis is obtained by applying a simple and relatively inexpensive "natural" modification; a set of these modifications is predefined and only when none of them is applicable is a real backup performed: in most cases this situation corresponds to a case where people would normally garden path.
{ "name": [ "Lesmo, Leonardo and", "Torasso, Pietro" ], "affiliation": [ null, null ] }
null
null
First Conference of the {E}uropean Chapter of the Association for Computational Linguistics
1983-09-01
22
14
null
The problem of performing an accurate syntactic analysis of Natural Language sentences is still challenging for A.I. people working in the field of N.L. interpretation (Charniak 81, Kaplan 82). The most relevant points which have attracted attention recently are:

- the need of a strong connection between syntactic processing and semantic interpretation, in order to reduce the space of the alternative syntactic analyses (Konolige 80, Sidner et al. 81, Milne 82);
- the convenience of a quasi-deterministic syntactic analysis, in order to reduce the computational overhead associated with a heavy use of backup (Marcus 80);
- the convenience of an approach which also tolerates (partially) incorrect sentences, at least when it is possible to obtain a meaningful interpretation (Weischedel & Black 80, Kwasny & Sondheimer 81, Hayes 81).

The first two of these remarks guided the design and the implementation of a system devoted to the interpretation of N.L. (Italian) commands (Lesmo, Magnani & Torasso 81a and 81b). In that system, however, as in most N.L. interpreters, the analysis of the input sentence is mainly syntax-driven; for this reason, only if the input sentence respects the constraints imposed by the syntactic knowledge can it be interpreted.

The problem of analyzing ill-formed sentences has received a great deal of attention recently. However, most studies (Weischedel & Black 80, Kwasny & Sondheimer 81) are based on standard syntactic analyzers (A.T.N.) which have been further augmented in order to take into account sentences lacking some required constituents (ellipsis) or where some syntactic constraints are not respected (e.g. agreement in number between the subject and the verb).

There are two problems with this approach; both of them depend on the choice of having a syntax-based analysis. The first problem is the necessity of extending the grammar; of course, it is necessary, in general, to specify what is grammatical and what is not, but it would be useful if this specification did not interfere too heavily with the interpretation of the sentence. In fact, if all deviations had to be accounted for in the grammar, an unforeseen structure would block the analysis, even if the sentence can be considered understandable. Consider, for instance, the following sentence:

Mary drove the car and John the truck (S1)

The absence of the verb in the second clause can be considered an acceptable form of ellipsis and, consequently, the sentence can be interpreted correctly. On the other hand, it is very unlikely that an extension of the grammar would cover the following ungrammatical (see Winograd 83, pag. 480) sentence:

*The book that for John to read would be difficult is beautiful (S2)

However, even if some effort is required, this sentence can be considered understandable. As stated above, a comprehensive system must be able to detect the ungrammaticality of S2, but this detection should not prevent the construction of a structure to pass to the semantic analyzer. Moreover, it seems that a subtle grammaticality test of this kind is easier to make (and to express) on a structured representation of the sentence (e.g. a tree) than on the input sentence as such.

The second problem which must be faced when an ATN is extended to handle ill-formed sentences is that of word ordering.
ATNs are powerful formal tools able to analyze type-0 languages; in the theory of formal languages a language is defined as a set of strings; for this reason ATNs must recognize "ordered sequences" of symbols (or words). Of course the natural languages also have fixed rules which define the admissible orderings of words and constituents, but, if those constraints have to be relaxed to accept ill-formed inputs, some extensions which are less straightforward than the ones used for handling the absence of a constituent are needed. For example, the sentence

Ate the apple John (S3)

is ungrammatical, easily understandable, but seems to require in an ATN the extension of the S network to allow the constituents to be traversed in a different (even if syntactically wrong) order. Also in this case it seems that the construction of a structured representation of the sentence could be the first step of the analysis; when it is done, the ordering constraints can easily be verified and, in case they are not respected, either an alternative analysis is tried or, as in the case of S3, the sentence is passed to the semantic analyzer and, possibly, the parser signals the presence of a syntactic error.

In this paper we present a parser which allows one to make explicit the interconnections between syntax and semantics, to analyze sentences in a quasi-deterministic fashion and, in many cases, to identify the roles of the various constituents even if the sentence is ill-formed.

The main feature of the approach on which the parser is based consists in the two-level representation of the syntactic knowledge: a first set of rules emits hypotheses about the constituents of the sentences and their functional role, and another set of rules verifies whether a hypothesis satisfies the constraints about the well-formedness of sentences. However, the application of the second set of rules is delayed until the semantic knowledge confirms the acceptability of the hypothesis. If the semantics reject the current hypothesis, an alternative one is tested: this control structure guarantees that all hypotheses which satisfy the weak syntactic constraints (which govern the emission of hypotheses) and the semantic constraints are tried before considering the input sentence as uninterpretable.

The claim that the parser operates in a quasi-deterministic fashion is justified by the kind of processing that the system performs when a hypothesis is rejected: in most cases a new hypothesis is obtained by applying a simple and relatively inexpensive "natural" modification; a set of these modifications is predefined and only when none of them is applicable is a real backup performed: in most cases this situation corresponds to a case where people would normally garden path.

The decision to pay particular attention to the problem of analyzing ill-formed sentences is motivated by the intended application of the parser. In fact it is included in a larger system, which allows the user to interact in natural language with a relational data base (Siklossy, Lesmo & Torasso 83, Lesmo, Siklossy & Torasso 83). Various systems have been developed in the last years which act as N.L. interfaces to data bases (Harris 77, Waltz 78, Konolige 80) and all of them pointed out the necessity of having at disposal mechanisms for handling ill-formed inputs (mainly ellipsis).

In the following some example sentences will be discussed; they refer both to the implemented system and to more general sentences.
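The control regime just outlined (weak structure-building hypotheses, delayed well-formedness checks, a predefined repertoire of "natural" modifications, and real backup only as a last resort) can be summarized in the following sketch. The function names and the flat representation of the analysis tree are invented for illustration; the actual system is organized differently and, as noted in the conclusions, is written in Franz Lisp.

def parse(words, hypothesize, semantic_ok, natural_changes):
    # words: the input sentence; hypothesize(tree, word) returns the attachments
    # licensed by the weak syntactic rules; semantic_ok checks an attachment
    # against the semantic knowledge; natural_changes are cheap restructurings.
    tree, alternatives = [], []
    i = 0
    while i < len(words):
        options = hypothesize(tree, words[i])
        if len(options) > 1:                       # ambiguity: save the analysis status
            alternatives.append((list(tree), options[1:], i))
        tree.append(options[0])
        if not semantic_ok(tree):
            for change in natural_changes:         # try the predefined modifications first
                changed = change(tree)
                if changed is not None and semantic_ok(changed):
                    tree = changed
                    break
            else:                                  # none applied: real backup (garden path)
                if not alternatives:
                    raise ValueError("uninterpretable sentence")
                saved_tree, saved_options, i = alternatives.pop()
                if len(saved_options) > 1:
                    alternatives.append((saved_tree, saved_options[1:], i))
                tree = list(saved_tree) + [saved_options[0]]
        i += 1
    return tree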
This is justified, because the linguistic coverage of the parser is wider than the one required by a data base interface, even if the data base, the semantic knowledge and the lexicon are restricted to a particular domain.

Before describing the parser control structure, it is worth having a look at the final representation of the input sentence which is produced by the parser. It consists in a tree which represents the relationships existing among the constituents of the input sentence according to the "head and modifier" approach (Winograd 83, pag. 73). An example of such a tree is reported in fig. 1. A few remarks on it:

- when a REL node is instantiated it does not contain any ROLE slot. Whereas the other slots are "filled" when the needed piece of information is available (normally this happens when the head of the verb is scanned), the ROLE slots are dynamically created when a given constituent is attached to the REL node (with the exception of AUX and H);
- some slots are redundant, since their contents can be deduced by traversing the tree. For example, the contents of the slot DEPEND and of the field SPECIAL of the ROLE slot can be obtained on the basis of the LINKUP node and of the first case of the clause respectively. They have been included for the sake of efficiency;
- the sole input word of the example sentence which does not appear in a node of fig. 1 is the auxiliary "hanno". Auxiliaries have been considered as components of the verb, so that their presence is signalled only by means of an AUX role. The actual auxiliary, its tense, its number, etc. are deducible from the contents of the other slots of the REL node.

The different types of nodes which have been defined are listed in Table 1. As stated in the introduction, the system should act as a natural language front-end for a relational data base. The structure reported in fig. 1 is the basis for performing the semantic checks and for translating the sentence into a relational algebra expression (Date 81) which corresponds to the input query. As will be described in the following sections, neither the semantic checks nor the actual translation of the query are done at the end of the syntactic analysis; in fact the semantic checks are performed when a node is filled with a content word, and the translation is obtained in an incremental way from the constituents occurring in the tree. For instance, the semantic check procedures will be triggered when the word "sesso" (sex) is encountered and the corresponding REF node is created, linked and filled, to verify that the students have a sex (or, more precisely, that the sequence "studente di sesso" is acceptable).

As regards the translation, it is worth noticing that it does not represent the interpretation of the given node, but the data base interpretation of the whole constituent headed by that node; for this reason it is obtained by combining the translations of all depending constituents. Let us consider, for example, the node REF2. The translation associated with CONN3 is

(join %student (select &sex (($sex eq m))) ($student eq $person))

The translation associated with REL2 is

(select &pass (($course eq Fisica) ($date eq 18/1/83)))

The resulting translation associated with REF2 is

(join (join %student (select &sex (($sex eq m))) ($student eq $person))
      (select &pass (($course eq Fisica) ($date eq 18/1/83)))
      ($student eq $student))

A detailed description of the way this translation is obtained is reported in (Lesmo, Siklossy & Torasso 83).
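The incremental construction of the TRANSL slot can be illustrated with a few lines of Python that build the same relational algebra terms shown above as nested lists; the helper names select and join are invented stand-ins, the actual expressions being assembled in Lisp.

def select(relation, conditions):
    return ["select", relation, conditions]

def join(left, right, condition):
    return ["join", left, right, condition]

# Translations already stored in the TRANSL slots of the dependants of REF2.
conn3_transl = join("%student",
                    select("&sex", [("$sex", "eq", "m")]),
                    ("$student", "eq", "$person"))

rel2_transl = select("&pass", [("$course", "eq", "Fisica"),
                               ("$date", "eq", "18/1/83")])

# When the constituent headed by REF2 is complete, its own translation joins the
# dependants' translations on the shared attribute, as in the expression above.
ref2_transl = join(conn3_transl, rel2_transl, ("$student", "eq", "$student"))

print(ref2_transl)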
However, for the sake of clarity it is important to say that %student is the unary relation whose unique attribute is $student and which contains the names of all the students whose data are stored in the data base; &sex is a binary relation (attributes $person and $sex) containing the sex of all the persons known to the system; finally &pass is the relation (attributes $student, $course, $grade, $date) where the results of the tests passed by the students are stored. The translations which have been shown are stored in the TRANSL slot of the associated nodes.

The tree described in the previous section is built by means of a set of rules of the form condition-action. With each syntactic category a subset of these rules is associated: when an input word of the given category is encountered in the input sentence, the subset of rules associated with that category is activated and the conditions are evaluated. The conditions involve tests on the current structure of the tree (i.e. the "status" of the analysis) and may request a one-word lookahead. If just one rule is selected (i.e. all other conditions evaluate to false), its action part is executed. An action consists in the construction of new nodes, in their filling up with particular values (normally depending on the input word) and in their attachment to the already existing tree. In Table 2 some of the rules of the packet associated with the category ADJECTIVE are reported as an example. The rules which are not reported handle the cases of predicative adjectives and adjectives preceded by adverbs. In some of the rules a one-word lookahead is used; it allows the parser to build the right structure in virtually all simple cases. In fact, even if the semantic knowledge source does not affect the choice of the rule, it can trigger the natural changes, which modify the tree; these changes substitute for the backup in many of the cases where the hypothesized syntactic structure does not satisfy the semantic constraints.

An example of a sentence portion which often can be disambiguated only by inspecting the semantic constraints is the following:

... - Determiner - Noun - Adjective - Noun - ...

In this case the adjective may modify either the preceding or the following noun. Consider the sentences S4 and S5:

Table 2 - Some of the rules associated with the syntactic category ADJECTIVE. The predicates used in the conditions are:
CURRENT X: TRUE if the current node is of type X.
UNFILLED X: TRUE if the current node or the node above is of type X and it is not filled yet.
CURFILL X: TRUE if the current node is of type X and is filled.
NEXT CAT: a lookahead function which returns TRUE if the category of the next word in the input string is CAT.
The structure-building functions used in the actions are:
CRLINK X1 X2: creates a new node of type X1 and links it to a node of type X2. The node which must be used is located by moving up on the rightmost branch of the tree.
FILL X VAL: a node of type X (located as in CRLINK) is filled with the value VAL (~ denotes the normalized form of the current word).

In general, however, it is not possible to avoid the use of backup. The backup mechanism is needed when more than one of the conditions of the rules associated with a particular category is matched, but this case is actually restricted to very complex (and unusual) relative clauses. More often, the backup is required when the input word is ambiguous, i.e. it belongs to more than one syntactic category.
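The primitives of Table 2 can be rendered as executable condition-action rules. The sketch below is hypothetical: the predicates and structure-building functions follow the glossary in the table, but the two rule bodies are invented for illustration, since the rules themselves are only partly reproduced here.

def CURRENT(branch, t):
    return branch[-1]["type"] == t                 # the current node is of type t

def CURFILL(branch, t):
    return branch[-1]["type"] == t and branch[-1]["value"] is not None

def UNFILLED(branch, t):
    return any(n["type"] == t and n["value"] is None for n in branch[-2:])

def NEXT(lookahead, cat):
    return lookahead == cat                        # one-word lookahead on the category

def CRLINK(branch, new_type, up_type):
    # Create a node of type new_type and link it to the nearest node of type
    # up_type, found by moving up on the rightmost branch of the tree.
    for node in reversed(branch):
        if node["type"] == up_type:
            child = {"type": new_type, "value": None, "children": []}
            node["children"].append(child)
            branch.append(child)
            return child
    return None

def FILL(branch, t, value):
    for node in reversed(branch):                  # nearest unfilled node of type t
        if node["type"] == t and node["value"] is None:
            node["value"] = value
            return node
    return None

adjective_rules = [
    # Hypothetical rule 1: the current REF is already filled by a noun and no noun
    # follows, so the adjective is taken to modify the noun just scanned.
    {"cond": lambda br, la: CURFILL(br, "REF") and not NEXT(la, "NOUN"),
     "act":  lambda br, w: (CRLINK(br, "ADJ", "REF"), FILL(br, "ADJ", w))},
    # Hypothetical rule 2: a noun follows (the Det-Noun-Adj-Noun case); the ADJ is
    # still attached under a REF, leaving the natural changes free to move it later.
    {"cond": lambda br, la: NEXT(la, "NOUN") and UNFILLED(br, "REF"),
     "act":  lambda br, w: (CRLINK(br, "ADJ", "REF"), FILL(br, "ADJ", w))},
]

branch = [{"type": "REF", "value": "ragazzo", "children": []}]
for rule in adjective_rules:
    if rule["cond"](branch, "VERB"):
        rule["act"](branch, "bravo")
        break
print(branch[0]["children"])                       # an ADJ node filled with "bravo"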
In this case all the conditions associated with the different categories are evaluated and, in some cases, more than one of them is matched. In all these cases the status of the analysis is saved (i.e. the current tree) together with the identifiers of the matched rules and a pointer into the input sentence.

As an example of sentences in which the backup mechanism is used, consider the sentences S6-S8; in them there is a lexical ambiguity for the word "che" (it acts as a relative pronoun in S6, as a conjunction in S7 and as an adjectival modifier in S8); moreover in S6 and S7 "pesca" is a form of the verb "pescare" (to fish) whereas in S8 it is a noun (the fishing).

Di' a quel ragazzo che pesca di andarsene (S6)
(Tell that boy who is fishing to go away)
Di' a quel ragazzo che pesca male (S7)
(Tell that boy that he is fishing badly)
Di' a quel ragazzo che pesca fantastica hai fatto (S8)
(Tell that boy what a marvellous fishing you have done)

When a node is filled, it is supposed to be already attached to the tree. The filling operation triggers some procedures associated with the type of the node which is being filled. Among them, the AGREEMENT procedures have the task of checking person, number and gender agreement between a node and its dependants. Particularly important is the agreement procedure associated with the REL node type, because it selects the REF node which can act as syntactic subject of the sentence (this suggestion may be overcome later by virtue of semantic considerations). If the agreement constraints are violated, then the natural changes are attempted; if no restructuring of the tree is successful, then the initial status is maintained without changes and a warning message is issued.

Perhaps, among the procedures triggered by the filling of a node, the one which has the most dramatic effects on the subsequent behavior of the system is the semantic check procedure. In fact, if the outcome of the semantic check procedure reports the non-admissibility of an attachment, the parser is forced to find another alternative. This is done by first applying the natural changes and then, if all of them fail, by performing a backup. A semantic procedure refers to the semantic knowledge of the domain under consideration, which is stored in the form of a two-level network (Lesmo, Siklossy & Torasso 83); the external level allows the checks to be performed, whereas the internal level carries the information necessary to perform the translation. The attachment of a REF node below another REF node has the purpose either of specifying a subset of the class identified by the noun stored in the upper REF or of referring to a property of a given object. An example of the first kind is "the professors of the department X" and an example of the second kind is "the sex of the professors ...". In this case the semantic procedure accesses the net to reject incorrect specifications of the form "the sex of the department X". A quite different behavior characterizes the attachment of a role to a verb (a REF node to a REL node via a CONN node); of course, the attachment of a new case cannot trigger a simple case check, but must take into account also all the cases attached before. A side effect of this process is the binding of the actual cases to the cases predicted in the net; this can be useful when there are two cases which have the same marker (or which are both unmarked) to determine, by using the selectional restrictions stored in the net, the actual role of the filler of each case (e.g.
syntactic subject or syntactic object). The completion of a constituent triggers the last set of syntactic rules; they verify whether the constituent (that is, the node itself and its descendants) respects the ordering constraints. In case those constraints are violated (e.g. "belli i bambini sono" - nice the babies are) a warning message is issued, but the sentence is considered as interpretable.

A word is due to explain the meaning of the term "complete". The constituent headed by the node n_i is considered as complete when a new node n_j is attached to a node n_k which is an ancestor of n_i; all constituents headed by the nodes belonging to the rightmost path of the tree are considered as complete when the system encounters the end of the sentence. The concept of "completion" of a constituent is particularly important because only when the constituent headed by the node n_i is complete does the system translate the constituent, by using the different pieces of information gathered by the semantic procedures, and store the translation in the TRANSL slot of the node n_i.
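The completion mechanism can be sketched as follows; check_ordering and translate stand in for the ordering rules and for the incremental construction of the TRANSL slot described earlier, and the flat list representation of the rightmost path is an illustrative simplification.

def complete_constituents(rightmost_path, attach_index, check_ordering, translate):
    # A new node has just been attached to rightmost_path[attach_index]; every node
    # below that point now heads a complete constituent.
    completed = rightmost_path[attach_index + 1:]
    for node in reversed(completed):               # innermost constituents close first
        if not check_ordering(node):               # a violation only produces a warning
            print("warning: constituent out of order, sentence still interpretable")
        node["TRANSL"] = translate(node)           # combine the dependants' translations
    del rightmost_path[attach_index + 1:]          # the closed nodes leave the rightmost path
    return completed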
The natural changes have the purpose of restructuring the tree by moving around constituents without requiring backup. They are represented as pattern-action rules, where the pattern part is used to select the rules which can be applied, whereas the action part implements the transformation of the tree. The natural changes currently implemented are of two main types:

- MOVE UP (the easiest and most common): it attaches a constituent (i.e. a subtree) to a higher node (whose type is specified in the rule) of the current branch of the tree.
- MOVE BACK: it attaches a constituent to the rightmost leaf of the preceding branch of the tree.

(Fig. 4 - Example of the use of a MOVE UP natural change. The semantic procedure associated with the REL node type detects that "sesso" cannot fill any of the cases of "sostenere" (a), so that the constituent headed by "sostenere" is MOVEd UP to "studente" (b).)

For example, a MOVE UP rule is used to build the tree shown in fig. 1: the relative clause "che hanno sostenuto ..." is first attached to the nearest REF node ("sesso"); when the verb is found the node REL2 is filled (fig. 4a), the agreement and semantic check procedures are triggered and the latter returns that "sesso" cannot fill an unmarked case of "sostenere", so that the partially built relative clause is moved up to REF2 ("studente" - fig. 4b); this new hypothesis is validated by the agreement and semantic procedures. An example of the application of a MOVE BACK rule has been given in the third section, in connection with the problem of attaching the adjectival nodes (see fig. 5).

As stated in the previous section, the natural changes do not replace the backup mechanism in all cases; the backup is strictly connected with the concept of "garden path". PARSIFAL (Marcus 80) is able to parse sentences in a deterministic way when they are not garden paths. However it has been shown (Milne 82) that:

- For a pair of potential garden path sentences, it is not possible to uniquely determine which is a garden path and which is not (different people may choose in different ways).
- The choice of having an n-constituent lookahead (as in PARSIFAL) does not allow one to decide whether a sentence is a potential garden path in a psychologically plausible way.

The semantic knowledge plays a fundamental role in choosing a particular analysis. Milne argues that a one-word lookahead, with the substantial help of semantic information, is what is needed to provide a model of N.L. which is psychologically sound (one-word lookahead plus semantics is also advocated in ).

We think that the approach adopted in our parser basically agrees with this position. In a rather vague sense, the non-complete nodes of our tree correspond to the Active Node Stack, i.e. to the not yet completed constituents of the sentence. The natural changes allow these nodes to be operated on on the basis of semantic information. However there is a fundamental difference: our parser has at its disposal the whole structure built previously. An example of the possibility of using non-active constituents is given by the MOVE BACK natural changes, where a previous constituent (already completed) is used to attach a node (see REF1 in fig. 5).
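A MOVE UP change of the kind shown in fig. 4 can be sketched as follows; the node layout, the slot names and the helper are simplified stand-ins for the actual pattern-action rules.

def move_up(branch, constituent, target_type):
    # Detach the constituent from its current parent and re-attach it to the
    # nearest higher node of the given type on the current branch; if no such
    # node exists, undo the change so that backup can take over.
    old_parent = constituent["up"]
    old_parent["children"].remove(constituent)
    for node in reversed(branch[:branch.index(old_parent)]):
        if node["type"] == target_type:
            node["children"].append(constituent)
            constituent["up"] = node
            return node
    old_parent["children"].append(constituent)
    return None

# The situation of fig. 4: the relative clause ("sostenere") first hangs on the
# REF node of "sesso"; the semantic check rejects that attachment.
studente = {"type": "REF", "value": "studente", "children": [], "up": None}
conn     = {"type": "CONN", "value": "di", "children": [], "up": studente}
sesso    = {"type": "REF", "value": "sesso", "children": [], "up": conn}
studente["children"].append(conn)
conn["children"].append(sesso)
rel = {"type": "REL", "value": "sostenere", "children": [], "up": sesso}
sesso["children"].append(rel)

branch = [studente, conn, sesso]
move_up(branch, rel, "REF")
print(rel["up"]["value"])        # studente: the clause has been MOVEd UP, as in fig. 4b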
This greater flexibility has the disadvantage of not giving any cue for deciding a priori what is a valid natural change and what is not (it is possible to devise natural changes for all possible kinds of restructuring of the tree); however, it allows the choice of heuristics which are in agreement with the actual behavior of humans and which fit in a simple way into the proposed model.

As regards the use of backup, the cited works do not give an account of what happens in the parser when an analysis fails due to a garden path (see, however, Marcus 80, ). Our provisional solution is to use the backup, a computational tool heavier than the natural changes: it should correspond to the situation when "the user must consciously undo this previous choice after detecting an inconsistency" (Woods 73, pag. 133). We acknowledge the problems associated with this choice, e.g. the need of saving at some times the status of the analysis, the possibility of interference with the natural changes, etc., but the backup is used parsimoniously (due to the condition part of the syntactic rules) and, anyway, we do not believe it is the final solution to this problem.

The paper describes a parser for a large subset of Italian. The novel control structure involves the use of natural changes which restructure the tree representing the status of the analysis without the intervention of the backup mechanism. This allows the system to operate in a pseudo-deterministic way, in that the use of backup is limited to sentences which could make people garden path.

Another major feature of the parser is its ability to cope with some kinds of ill-formedness of the input sentences. This is obtained by a decomposition of the syntactic knowledge into two levels: the first level contains structure-building rules, whereas the second level contains rules of agreement and rules related to the ordering of constituents. This structuring of the syntactic knowledge allows the parser to be data driven: the scanning of a new input word produces its insertion into the analysis tree; this may be seen as a hypothesis of interpretation, which can be accepted or rejected later on the basis of other independent knowledge sources. This allows the system to avoid the use of classical rewriting rules or transition networks, which represent in an integrated way all syntactic constraints.

As stated in the introduction, the authors are developing a N.L. interface to a relational data base. The lexical analyzer and the access procedures to the network representing the semantic constraints are running, the construction rules and the natural changes are being debugged, whereas the ordering rules are under development. The translation into the actual data base query is running. The system is written in FRANZ LISP and runs on a VAX 11/780 under the UNIX operating system.
null
null
null
Main paper: natural changes versus backup: The natural changes have the purpose of re structuring the tree by moving around constituents without requiring backup. They are represented as pattern-action rules, where the pattern part is used to select the rules which can be applied, whereas the action part implements the transforma lion of the tree. The natural changes currently im plemented are of two main types: -MOVE UP (the easiest and most common): it at( Fig.4 -Example of the use of a MOVE UP natural change. The semantic procedure associated with the REL node type detects that "sesso" cannot fill any of the cases of "sostenere" (a), so that the constituent headed by "so stenere" is MOVEd UP to "studente" (b).CONNI ~ i )CONN2 °% (b)taches a constituent (i.e, a subtree) to a higher node (whose type is specified in the rule) of the current branch of the tree. -MOVE BACK: it attaches a constituent to the right most leaf of the preceding branch of the tree. For example; a MOVE UP rule is used to build the tree shown in fig.l : the relative clause "che hanno sostenuto ..." is firstly attached to the nearest REF node ("sesso") ; when the verb is found the node REL2 is filled ( fig.4a) , the agreement and semantic check procedures are triggered and this latter re turns that "sesso" cannot fill an unmarked case of "sostenere", so that the partially built relative clause is moved up to REF2 ("studente" -fig.4b); this new hypothesis is validated by the agreement and semantic procedures. An example of the'applic~ tion of a MOVE BACK rule has been given in the third section, in connection with the problem of attaching the adjectival nodes (see fig.5 ).As stated in the previous section, the natural changes do not substitute in all cases the backup mechanism; the backup is strictly connected with the concept of "garden path". PARSIFAL (Marcus 80) is able to parse sentences in a deterministic way when they are not garden paths. However it has been shown (Milne 82 ) that:-For a pair of potential garden path sentences, it is not possible to uniquely determine which is a garden path and which is not (different people may choose in different ways). -The choice of having a n-constituent lookahead (as in PARSIFAL) does not allow to decide whether a sentence is a potential garden path in a psych~ logically plausible way.The semantic knowledge plays a fundamental role in choosing a particular analysis. Milne argues that a one-word lookahead, with the substantial help of semantic information is what is needed to provide a model of N.L. which is psych~ logically sound (one-word lookahead plus semantics is also advocated in .We think that the approach adopted in our pa~ ser basically agrees with this position. In a rat~ er vague sense, the non-complete nodes of our tree correspond with the Active Node Stack, i.e. with the not yet completed constituents of the sentence. The natural changes allow to operate on these nodes on the basis of semantic information. However there is a fundamental difference: our parser has at dis posal the whole structure built previously. An e~ ample of the possibility of using non-active co~ stituents is given by the MOVE BACK natural changes where a previou$constituent (already completed) ~s used to attach a node (see REFI in fig.5 ). 
This greater flexibility has the disadvantage of not gi~ ing any cue for deciding a-priori what is a valid natural change and what is not (it is possible to devise natural changes for all possible kinds of restructuring of the tree); however, it allows to -choose heuristics which are in agreement with the actual behavior of humans and which fit in a simple way in the proposed model.As regards the use of backup, the cited works do not give an account of what happens in the pal set when an analysis fails due to a garden path (see, however, Marcus 80, . Our prov! sional solution is to use the backup, a computation al tool heavier than the natural changes: it should correspond to the situation when "the user must ton m sciously undo this previous choice after detect ing an inconsistency" (woods 73, pag.133). We ac knowledge the problems associated with this choice, e.g. the need of saving at some times the status of the analysis, the possibility of interference with the natural changes, etc., but the backup is used parsimoniously (due to the condition part of the syntactic rules) and, anyway, we do not believe it is the final solution to this problem.The paper describes a parser for a large sub set of Italian. The novel control structure in volves the use of natural changes which restructure the tree representing the status of the analysis without the intervention of the backup mechanism. This allows the system to operate in a pseudo-dete~ ministic way, in that the use of backup is limited to sentences which could make people garden path.Another major feature of the parser is its a bility to cope with some kinds of ill-formedness of the input sentences. This is obtained by a decomp~ sition of the syntactic knowledge into two levels: the first level contains structure building rules, whereas the second level contains rules of agree ment and rules related with the ordering of constit uents. This structuring of the syntactic knowledge allows the parser to be data driven: the scanning of a new input word produces its insertion into the analysis tree; this may be seen as an hypothesis of interpretation, which can be accepted or rejected later on the basis of other independent knowledge sources. This allows the system to avoid the use of classical rewriting rules or transition networks which represent in an integrated way all syntactic constraints.As stated in the introduction, the authors are developing a N.L. interface to a relational data base. The lexical analyzer and the access proce dures to the network representing the semantic con straints are running, the construction rules and the natural changes are being debugged, whereas the ordering rules are under development. The transla tion into the actual data base query is running. The system is written in FRANZ LISP and runs on a VAX 11/780 under the UNIX operating system. introduction: The problem of performing an accurate synta~ tic analysis of Natural Language sentences is still challenging for A.I. people working in the field of N.L. interpretation (Charniak 81, Kaplan 82) . The most relevant points which attracted at tention recently are: the need of a strong connection between synta~ tic processing and semantic interpretation in order to reduce the space of the alternative sy~ tactic analyses (Konolige 80, Sidner et al. 
81, Milne 82) -the convenience of a quasi-deterministic synta~ tic analysis, in order to reduce the computation al overhead associated with a heavy use of back up (Marcus 80) -the convenience of an approach which tolerates also (partially) incorrect sentences, at least when it is possible to obtain a meaningful inter pretation (Weischedel & Black 80, Kwasny & Sond heimer 81, Hayes 81 ). The first two of these remarks guided the design and the implementation of a system devoted to the interpretation of N.L. (Italian) commands (Lesmo, Magnani & Torasso 81a and 81b ). In that system, however, as in most N.L. interpreters, the anal~ sis of the input sentence is mainly syntax-driven; for this reason, justin case the input sentence respects the constraints imposed by the syntactic knowledge it can be interpreted.The problem of analyzing ill-formed sentences has received a great deal of attention recently. However, most studies (Weischedel & Black 80, Kwasny & Sondheimer 81) are based on standard syn_ tactic analyzers (A.T.N.) which have been further ly augmented in order to take into account sen fences lacking some required constituents (elli~ sis) or where some syntactic constraints are not respected (e.g. agreement in number between the subject and the verb).There are two problems with this approach; both of them depend on the choice of having a sy~ tax based analysis. The first problem is the ne cessity of extending the grammar; of course, it is necessary, in general, to specify what is grarmuat~ cal'and what is not, but it would be useful that this specification does not interfere too heavily in the interpretation of the sentence. In fact, if all deviations would have to be accounted for in the grammar, an unforeseen structure would block the analysis, even if the sentence can be consider ed as understandable. Consider, for instance, the following sentence:Mary drove the car and John the truck (SI) The absence of the verb in the second clause can be considered an acceptable form of ellipsis and, consequently, the sentence can be interpreted cor rectly. On the othe: hand, it is very unlikely that an extension of the grammar would cover the following ungrammatical (see Winograd 83, pag.480) sentence: •The book that for John to read would be difficult is beautiful ($2) However, even if some efforts are required, this sentence can be considered as understandable. As stated above, a comprehensive system must be able to detect the ungrammaticality of $2, but this de tection should not prevent the construction of a structure to pass to the semantic analyzer. More over, it seems that a subtle grammaticality test of this kind is easier to make (and to express) on a structured representation of the sentence (e.g. a tree) than on the input sentence as such.The second problem which must be faced when an ATN . ~s extended to handle ill-formed sen tences is the one of word ordering. ATNs are po E erful formal tools able to analyze type-O lan guages; in the theory of formal languages alan guage is defined as a set of strings; for this reason ATNs must recognize Uordered sequences" of symbols (or words). Of course also the natural lan guages have fixed rules which define the admissi ble orderings of words and constituents, but, if those constraints have to be relaxed to accept illformed inputs, some extension%which are less straightforward than the ones used for handling the absence of a constituent are needed. 
For exam pie, the sentence Ate the apple John ($3) is ungrammatical, easily understandable, but seems to require in an ATN the extension of the S net~to allow to traverse the constituents in a different (even if syntactically wrong) order. Also in this case it seems that the construction of a struetur ed representation of the sentence could be the first step of the analysis; when it is done, the ordering constraints can easily be verified and, in case they are not respected either an alterna rive analysis is tried•or, as in the case of $3~ the sentence is passed to the Semantic analyzer and, possibly, the parser signals the presence of a syntactic error.In this paper we present a parser which al lows to make axplicit the interconnections between syntax and semantics , to analyze the sentences in a quasi-deterministic fashion and, in many cases, to identify the roles of the various constituents even if the sentence is ill-formed.The main feature of the approach on which the parser is based consists in the two-level represe~ tation of the syntactic knowledge: a first set of rules emits hypotheses about the constituents of the sentences and their functional role and an m other set of rules verifies whether a hypothesis satisfies the constraints about the well-formed hess of sentences. However, the application of the second set of rules is delayed until the semantic knowledge confirms the acceptability of the hyp~ thesis. If the semantics reject the current hyp~ thesis, an alternative one is tested: this control structure guarantees that all hypotheses which sa tisfy the weak syntactic constraints (which govern the emission of hypotheses) and the semantic con straints are tried before considering the input sentence as uninterpretable.The claim that the parser operates in a quasideterministic fashion is justified by the kind of processing that the system performs when a hyp~ thesis is rejected: in most cases a new hypothesis is obtained by applying a simple and relatively un expensive "natural" modification; a set of these modifications is predefined and only when none of them is applicable a real backup is performed: in most cases this situation corresponds to a case where people would normally garden path.The decision of paying particular attention to the problem of analyzing ill-formed sentences is motivated by the intended application of the parser. In fact it is included in a larger system, which allows the user to interact in natural lan guage with a relational data base (Siklossy, Lesmo & Torasso 83, Lesmo, Siklossy & Torasso 83) . Various systems have been developed in the last years, which act as N.L. interfaces to data bases (Harris 77, Waltz 78, Konolige 80 ) and all of them pointed out the necessity of having at disposal mechanisms for handling ill-formed inputs (mainly ellipsis).In the following some example sentences will be discussed; they refer both to the implemented system and to more general sentences. This is ju~ tified, because the linguistic coverage of the perser is wider than the one required by a data base interface, even if the data base, the seman tic knowledge and the lexicon are restricted to" a particular domain.Before describing the parser control struc ture, it is worth having a look at the final re~ resentation of the input sentence which is prod~ ced by the parser. It consists in a tree which represents the relationships existing among the constituents of the input sentence according to the "head and modifier" approach (Winograd 83, pag.73) °. 
An example of such a tree is reported in fig.l when a REL node is instantiated it does not con rain any ROLE slot. Whereas the other slots are "filled" when the needed piece of information is available (normally this happens when the head of the verb is scanned), the ROLE slots are d~ namically created when a given constituent is attached to the REL node (with the exception of AUX and H); -some slots are redundant, since their contents can be deduced by traversing the tree. For exam pie, the contents of the slot DEPEND and of the field SPECIAL of the ROLE slot can be obtained on the basis of the LINKUP node and of the first case of the clause respectively. They have been included for the sake of efficiency; -the sole input word of the example sentence which does not appear in a node of fig.l is the auxiliary "hanno". Auxiliaries have been consid ered as components of the verb, so that their presence is signalled only by means of an AUX role. The actual auxiliary, its tense, its num ber, etc. are deducible from the contents of the other slots of the REL node.The different types of nodes which have been defined are listed in Table i. As stated in the introduction, the system should act a~ a natural language front-end for a relational data base. The structure reported in fig.l is the basis for performing the semantic checks and for translating the sentence in a rela tional algebra expression (Date 81) which corr~ spond to the input query. As will be described in the following sections, neither the semantic checks nor the actual translation of the query are done at the end of the syntactic analysis; in fact the semantic checks are performed when a node is filled with a content word and the translation is obtained in an incremental way from the constit~ ents occurring in the tree. For instance, the s~ mantic check procedures will be triggered when the word "sesso" (sex) is encountered and the corre spending REF node is created, linked and filled to verify that the students have a sex (or, more precisely, that the sequence "studente di sesso" is acceptable).As regards the translation, it is worth n~ ricing that it does not represent the interpret~ tion of the given node, but the data base inter pretation of the whole constituent headed by that node; for this reason it is obtained by combining the translations of all depending constituents. Let us consider, for example, the node REF2. The translation associated with CONN3 is (join %s tudent (select &sex ((~sex eq m))) ($student eq ~person))The translation associated with REL2 is (select &pass ((~course eq Fisiea) (~date eq 18/1/83)))The resulting translation associated with REF2 i3(join (join %student (select &sex ((~sex eq m))) ($student eq ~person)) (select &pass (($course eq Fisica) (~date eq 18/1/83))) (~student eq ~student)) A detailed description of the way this translation is obtained is reported in (Lesmo, Siklossy, Tora h so 83). However, for the sake of clarity it is im portant to say that %student is the unary relation whose unique attribute is ~student and which co~ tains the names of all the students whose data are stored in the data base; &sex is a binary relation (attributes Sperson and ~sex) containing the sex of all the persons known to the system; finally &pass is the relation (attributes ~student, ~course, ~grade, ~date) where are stored the re suits of the tests passed by the students. 
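To make the incremental translation concrete, the sketch below rebuilds the REF2 expression shown above from the translations of its dependants. The expressions are kept as plain nested Python lists, attribute sigils are written uniformly with "$", and the join condition is passed in explicitly, since the semantic network that supplies it in the real system is not modelled here.

```python
def select(relation, conditions):
    # (select relation ((attr op value) ...))
    return ["select", relation, conditions]

def join(left, right, condition):
    # (join left right (attr op attr))
    return ["join", left, right, condition]

# Translation attached to CONN3: the male students.
conn3 = join("%student",
             select("&sex", [["$sex", "eq", "m"]]),
             ["$student", "eq", "$person"])

# Translation attached to REL2: tests of Fisica passed on 18/1/83.
rel2 = select("&pass", [["$course", "eq", "Fisica"],
                        ["$date", "eq", "18/1/83"]])

# Translation of REF2: obtained by combining the dependants' translations.
ref2 = join(conn3, rel2, ["$student", "eq", "$student"])
print(ref2)
```

When the constituent headed by REF2 is complete, a combined expression of this kind is what would be stored in its TRANSL slot.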
The translation which have been shown are stored in the TRANSL slot of the associated nodes.The tree described in the previous section is built by means of a set of rules of the form condi tion-action. With each syntactic category a subset of these rules is associated: when an input word of the given category is encountered in the input sen tence, then the subset of rules associated with that category is activated and the conditions are evaluated. The conditions involve tests on the cur rent structure of the tree (i.e. the "status" of the analysis) and may request a one-word lookahead. If just one rule is selected (i.e. all other condi tions evaluate to false), its action part is exe cured. An action consists in the construction of new nodes, in their filling up with particular val ues (normally depending on the input word) and in their attachment to the already existing tree. In table 2 are reported as an example some of the rules of the packet associated with the category ADJECTIVE. The rules which are not reported handle the cases of predicative adjectives and adjective~ preceded by adverbs. In some of the rules a oneword lookahea~is used; it allows the parser to build the right structure in virtually all simple cases. In fact, even if the semantic knowledge source does not affect the choice of the rule, it can trigger the natural ch~l~nges, which modify the tree; these changes substitute the backup in many of the cases wher~the hypothesized syntactic struc ture does not satisfy the semantic constraints.An example of a sentence portion which otto, can be disambiguated only by inspecting the seman tic constraints is the following:... -Determiner -Noun ~ Adjective -Noun -...In this case the adjective may modify either the preceding or the following noun. Consider the sen tences $4 and $5°: Table 2 -Some of the rules associated with the sY_nn tactic category ADJECTIVE. The predicates used in the conditions are CURRENT X: TRUE if the current node is of type X. UNFILLED X: TRUE if the current node or the node above is of type X and it is not filledyet. CURFILL X: TRUE if the current node is of type X and is filled. NEXT CAT: is a lookahead function which returns TRUE if the category of the next word in the input string is CAT. The structure-building functions used in the actions are CRLINK XI X2: creates a new node of type XI and links it to a node of type X2. The node which must be used is located by moving up on the rightmost branch of the tree. FILL X VAL: a node of type X (located as in CRLINK) is filled with the value VAL (~ denotes the normalized form of the current word).PerIn general, however, it is not possible to void the use of backup. The backup mechanism is needed when more than one of the conditions of the rules associated with a particular category is matched, but this case is actually restricted to very complex (and unusual) relative clauses. More often, the backup is required when the input word is ambiguous, i.e. it belongs to more than one sy~ tactic categories. In this case all conditions a~ sociated with the different categories are evalu ated an~ in some cases more than one of them is matched. In all these cases the status of the ana lysis is saved (i.e. 
the current tree) together with the identifiers of the matched rules and a pointer to the input sentence.As an example of sentences in which the bac h up mechanism is used consider the sentences $6-$8; in them there is a lexical ambiguity for the word "che" (it acts as a relative pronoun in $6, as a conjunction in S7 and as an adjectival modifier in $8); moreover in $6 and S7 "pesca" is a form of the verb "pescare" (to fish) whereas in $8 it is a noun (the fishing).Di a quel ragazzo ehe pesca di andarsene ($6) (Tell that boy who is fishing to go away) Di a quel ragazzo che pesca male ($7) (Tell that boy that he is fishing badly) DI a quel ragazzo che pesca fantasticahai fatto (Tell that boy what a marvel lous fishing you have done).When a node is filled, it is supposed to be already attlched to the tree. The filling opera lion triggers some procedures associated with the type of the node which is being filled. Among them, the AGREEMENT procedures have the task of checking person, number and gender agreement between a node and its dependants. Particularly important is the agreement procedure associated with the REL node type, because it selects the REF node which can act as syntactic subject of the sentence (this suggestion may be overcome later by virtue of se mantic considerations). If the agreement con straints are violated, then the natural changes are attempted; if no restructuring of the tree is successful, then the initial status is maintained without changes and a warning message is issued.Perhaps, among the procedures triggered by the filling of a node, the one which have the most dramatic effects on the subsequent behavior of the system is the semantic check procedure. In fact, if the outcome of the semantic check procedure re ports the non-admissibility of an attachment, the parser is forced to find another alternative. This is done by first applying the natural changes and then, if all of them fail, by performing a backup. A semantic procedure refers to the semantic know ledge of the domain under consideration, which is stored in form of a two-level network (Lesmo, "iklossy & Torasso 83) ; the external level allows to perform the checks, whereas the internal level carries the information necessary to perform the translation. purpose either of specifying a subset of the class identified by the noun stored in the upper REF or to refer to a pro~ erty of a given object. An example of the first kind is "the professors of the department X" and an example of the second kind is "the sex of the professors ...". In this case the semantic proc~ dure accesses the net to reject incorrect specif! cations of the form "the sex of the department X". A quite different behavior characterizes the at tachment of a role to a verb (a REF node to a REL node via a CONN node); of course, the attachment of a new case cannot trigger a simple case check, but must take into account also all the cases at tached before. A side effect of this process is the binding of the actual cases to the cases pr~ dieted in the net; this can be useful when there are two cases which have the same marker (or which are both unmarked) to determine, by using the se lectional restrictions stored in the net, the actu al role of the filler of each case (e.g. syntactic subject or syntactic object).The completion of a constituent triggers the last set of syntactic rules; they verify whether the constituent (that is the node itself and its descendants) respects the ordering constraints. In case those constraints are violated (e.g. 
"belli i bambini sono" -nice the babies are) a warning mes sage is issued but the sentence is considered as interpretable.A word is due to explain the meaning of the term "complete". The constituent headed by the node n° is considered as complete when a new node i n. is attached to a node n k which is an ancestor gf ni; all constituents headed by the nodes b~ longing to the rightmost path of the tree are con sidered as complete when the system encounters the end of the sentence. The concept of "completion" of a constituent is particularly important because only when the constituent headed by the node n. is i complete the system translates the constituent by using different pieces of information gathered by thesemantic procedures and stores the translation in the TRANSL slot of the node n.. Appendix:
null
null
null
null
{ "paperhash": [ "milne|predicting_garden_path_sentences", "lesmo|a_deterministic_analyzer_for_the_interpretation_of_natural_language_commands", "charniak|six_topics_in_search_of_a_parser:_an_overview_of_ai_language_research", "kwasny|relaxation_techniques_for_parsing_grammatically_ill-formed_input_in_natural_language_understanding_systems", "weischedel|responding_intelligently_to_unparsable_inputs", "sidner|research_in_knowledge_representation_for_natural_language_understanding", "marcus|a_theory_of_syntactic_recognition_for_natural_language", "woods|research_in_natural_language_understanding", "waltz|an_english_language_question_answering_system_for_a_large_relational_database", "bates|language_as_a_cognitive_process", "date|an_introduction_to_database_systems" ], "title": [ "Predicting Garden Path Sentences", "A Deterministic Analyzer for the Interpretation of Natural Language Commands", "Six Topics in Search of a Parser: An Overview of AI Language Research", "Relaxation Techniques for Parsing Grammatically Ill-Formed Input in Natural Language Understanding Systems", "Responding Intelligently to Unparsable Inputs", "Research in Knowledge Representation for Natural Language Understanding", "A theory of syntactic recognition for natural language", "Research in Natural Language Understanding", "An English language question answering system for a large relational database", "Language as a Cognitive Process", "An Introduction to Database Systems" ], "abstract": [ "This work is an investigation into part of the human sentence parsing mechanism (HSPM). The major test of the psychological validity of any model of the HSPM is that it fail on precisely those sentences that humans find to be garden paths. It is hypothesized that the HSPM consists of at least two processes. We call the first process the syntactic processor, and the second will be known as the semantic processor. It is hypothesized that the syntactic processor is unconscious, deterministic and fast, but limited. While most ambiguities are resolved on the basis of syntactic information, when the syntactic processor can no longer guarantee a correct analysis, semantic information is used to help resolve the ambiguity. This model leads to a better prediction and explanation of which sentences will cause people to garden path.", "This paper describes a system which translates a query in Italian language into a representation which can be immediately interpreted as a sequence of algebraic operations on a relational data base. The use of a lookshead buffer allows the system to operate deterministically. Different knowledge sources are used to cope with semantics (associated with the lexicon) and syntax (represented as pajt tern-action rules). These knowledge sources cooper ate during the query translation so that independent translation steps and intermediate representations of the command are avoided. Therefore the term \"determinism\" is used to mean that all the structures built during the process concur to build the final command representation.", "My purpose in this paper is to give an overview of natural language understanding work within artificial intelligence (AI). 1 will concentrate on the problem of parsing going from natural language input to a semantic representation. Naturally, the form of semantic representation is a factor in such discussions, so it will receive some attention as well. Furthermore. 
1 doubt that parsing can be completely isolated from text processing issues, and hence I will touch upon such seemingly non-parsing issues as script application. Nevertheless, the topic is parsing", "This paper investigates several language phenomena either considered deviant by linguistic standards or insufficiently addressed by existing approaches. These include co-occurrence violations, some forms of ellipsis and extraneous forms, and conjunction. Relaxation techniques for their treatment in Natural Language Understanding Systems are discussed. These techniques, developed within the Augmented Transition Network (ATN) model, are shown to be adequate to handle many of these cases.", "All natural language systems are likely to receive inputs for which they are unprepared. The system must be able to respond to such inputs by explicitly indicating the reasons the input could not be understood, so that the user will have precise information for trying to rephrase the input. If natural language communication to data bases, to expert consultant systems, or to any other practical system is to be accepted by other than computer personnel, this is an absolute necessity.This paper presents several ideas for dealing with parts of this broad problem. One is the use of presupposition to detect user assumptions. The second is relaxation of tests while parsing. The third is a general technique for responding intelligently when no parse can be found. All of these ideas have been implemented and tested in one of two natural language systems. Some of the ideas are heuristics that might be employed by humans; others are engineering solutions for the problem of practical natural language systems.", "Abstract : This report summarizes the research of BBN's ARPA-sponsored Knowledge Representation for Natural Language Understanding project during its fourth year. In it we report on advances, both in theory and implementation, in the areas of knowledge representation, natural language understanding, and abstract parallel machines. In particular, we report on theoretical advances in the knowledge representation system KL-ONE, extensions to the KL-ONE system, and new uses of KL-ONE in the domain of knowledge about graphic displays. We report on a design for a new prototype natural language understanding system, on issues in cascaded architectures for interaction among the components of a language system, and on a module for Lexical acquisition. In addition, we examine three topics in discourse: a new model of speaker meaning, which extends our previous work on speakers' intentions, an investigation of reference planning and identification, and a theory of 'one'-anaphora interpretation. Our discussion of abstract parallel machines reports on a class of algorithms that approximate Quillian's (49) ideas on the function of human memory. (Author)", "Abstract : Assume that the syntax of natural language can be parsed by a left-to-right deterministic mechanism without facilities for parallelism or backup. It will be shown that this 'determinism' hypothesis, explored within the context of the grammar of English, leads to a simple mechanism, a grammar interpreter. (Author)", "Abstract : The goals of the project are to develop techniques required for fluent and effective communication between a decision maker and an intelligent computerized display system in the context of complex decision tasks such as military command and control. 
This problem is approached as a natural language understanding problem, since most of the techniques required would still be necessary for an artificial language designed specifically for the task. Characteristics that are considered important for such communication are the ability for the user to omit details that can be inferred by the system and to express requests in a form that 'comes naturally' without extensive forethought or problem solving. These characteristics lead to the necessity for a language structure that mirrors the user's conceptual model of the task and the equivalents of anaphoric reference, ellipsis, and context-dependent interpretation of requests. these in turn lead to requirements for handling large data bases of general world knowledge to support the necessary inferences. The project is seeking to develop techniques for representing and using real world knowledge in this context, and for combining it efficiently with syntactic and semantic knowledge. This report discusses aspects of research to date and a general approach to definite anaphoric reference and near-deterministic parsing strategies.", "By typing requests in English, casual users will be able to obtain explicit answers from a large relational database of aircraft flight and maintenance data using a system called PLANES. The design and implementation of this system is described and illustrated with detailed examples of the operation of system components and examples of overall system operation. The language processing portion of the system uses a number of augmented transition networks, each of which matches phrases with a specific meaning, along with context registers (history keepers) and concept case frames; these are used for judging meaningfulness of questions, generating dialogue for clarifying partially understood questions, and resolving ellipsis and pronoun reference problems. Other system components construct a formal query for the relational database, and optimize the order of searching relations. Methods are discussed for handling vague or complex questions and for providing browsing ability. Also included are discussions of important issues in programming natural language systems for limited domains, and the relationship of this system to others.", "Books reviewed in the AJCL will be those of interest to computat ional linguists; books in closely related disciplines may also be considered. The purpose of a book review is to inform readers about the content of the book and to present opinions on the choice of material, manner of presentat ion, and suitability for various readers and purposes. There is no limit to the length of reviews. The appropriate length is determined by its content. If you wish to review a specific book, please contact me before doing so to check that it is not already under review by someone else. If you want to be on a list of potential reviewers, please send me your name and mailing address together with a list of keywords summarizing your areas of interest. You can also suggest books to be reviewed without volunteering to be the reviewer.", "From the Publisher: \nFor over 25 years, C. J. Date's An Introduction to Database Systems has been the authoritative resource for readers interested in gaining insight into and understanding of the principles of database systems. This revision continues to provide a solid grounding in the foundations of database technology and to provide some ideas as to how the field is likely to develop in the future.. 
\"Readers of this book will gain a strong working knowledge of the overall structure, concepts, and objectives of database systems and will become familiar with the theoretical principles underlying the construction of such systems." ], "authors": [ { "name": [ "R. Milne" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "L. Lesmo", "D. Magnani", "P. Torasso" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Eugene Charniak" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "S. Kwasny", "N. Sondheimer" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "R. Weischedel", "J. Black" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "C. Sidner", "M. Bates", "R. Bobrow", "R. Brachman", "Philip R. Cohen", "David J. Israel", "B. Webber", "W. Woods" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Mitchell P. Marcus" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "W. Woods", "R. Brachman" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "D. Waltz" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Lyn Bates", "T. Winograd" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "C. J. Date" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null, null, null, null, null, null, null, null, null ], "s2_corpus_id": [ "62447597", "35511574", "14027944", "181820", "18828496", "59852936", "6616065", "61138592", "18227465", "2209224", "227993896" ], "intents": [ [ "background" ], [], [ "background" ], [ "background" ], [ "background" ], [], [ "background" ], [], [ "background" ], [], [] ], "isInfluential": [ false, false, false, false, false, false, true, false, false, false, false ] }
Problem: The problem addressed in this paper is the accurate syntactic analysis of natural language sentences, particularly focusing on the challenge of analyzing ill-formed sentences. Solution: The paper proposes a parser that allows for the explicit representation of the interconnections between syntax and semantics, enabling the analysis of sentences in a quasi-deterministic manner. The parser aims to identify the roles of various constituents even in cases where the sentence is ill-formed.
497
0.028169
null
null
null
null
null
null
null
null
b2a75100f9c253674479ca767cee9b57f8b1b1fe
6848223
null
An Expert System for the Production of Phoneme Strings From Unmarked {E}nglish Text Using Machine-Induced Rules
The speech synthesis group at the Computer-Based Education Research Laboratory (CERL) of the University of Illinois at Urbana-Champaign is developing a diphone speech synthesis system based on pitch-adaptive short-time Fourier transforms. This system accepts the phonemic specification of an utterance along with pitch, time, and amplitude warping functions in order to produce high-quality speech output from stored diphone templates.
{ "name": [ "Segre, Alberto Maria and", "Sherwood, Bruce Arne and", "Dickerson, Wayne B." ], "affiliation": [ null, null, null ] }
null
null
First Conference of the {E}uropean Chapter of the Association for Computational Linguistics
1983-09-01
20
6
null
This paper describes the operation of a program which operates as a front end for the dlphone speech synthesis system. The UTTER (for "Unmarked Text Transcription by Expert Rule") system maps English text onto a phoneme string, which is then used as an input to the dlphone speech synthesis system. The program is a twotiered Expert System which operates first on the word level and then on the (vowel or consonant) cluster level.The system's knowledge about pronunciation is organized in two decision trees automatically generated by an induction algorithm on a dynamically specified "training set" of examples.in that they are often unable to cope with a letter pattern that maps onto more than one phoneme pattern. Extreme cases are those words which, although differing in pronunciation, share orthographic representations (an analogous problem exists in speech recognition, where words which share phonemic representations differ in orthographic representation, and therefore possibly in semantic interpretation).A notable exception is the MIT speech synthesis system fAllen81] which is llngulstlcally-based, but not solely phoneme-based.A desirable feature in any rule-based system is the ability to automatically acquire or modify its own rules. Previous work [Oakey81] applies this automatic inference process to the text-tophoneme transcription problem.Unfortunately, Onkey's system is strlctly letter-based and suffers from the same deficiencies as other nonilnguistlcally-based systems.The UTTER system is an attempt to provide a llngulstlcally-based transcription system which has the ability to automatically acquire its own rule base.Most speech synthesis systems in use today require that eventual utterances be specified in terms of phoneme strings. The automatic transformation of normal English texts into phoneme strings is therefore a useful front-end process for any speech synthesis unit which requires such phonemic utterance specification.Unfortunately, this transcription process is not nearly as straightforward as one might initially imagine. It is common knowledge to nonnatlve speakers that English poses some particularly treacherous pronunciation problems. This is due, in part, to the mixed heritage of the language, which shares several orthographic bloodlines.Past attempts to create orthographicallybased computer algorithms have not met with great success. Algorithms such as the Naval Research Laboratory pronunciation algorithm [Elovitz76] are letter-based instead of llnguistlcally-based. For this reason, such algorithms are excessively rigidThe system's basic goal is the transcription of input text into phoneme strings. The method used to accomplish this goal is based on a method taught to foreign students which enables them to properly pronounce unknown English words [DickersonF1, DickersonF2] . The method is basically a two stage process. The first stage consists in assigning major stress to one of the word's syllables. The second stage maps a vowel or consonant group with a known stress value uniquely onto its corresponding phoneme string. It is the stress-asslgnment process which distinguishes this pronunciation method from applying purely letterbased text-to-speech rules, as in, for example, the Naval Research Laboratory algorithm [Elovltz76].In order to accomplish the transcription of text into phoneme strings, the system uses a set of two transcription rules which are machine generated over a set of sample transcriptions. 
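The two-stage method lends itself to a compact sketch, assuming the two machine-induced rules and the feature extractors are available as functions; every name below is a placeholder for a component described in the following sections, not part of UTTER itself.

```python
def transcribe_word(word, stress_rule, cluster_rule,
                    word_features, cluster_features, split_clusters):
    """Stage 1: assign major stress to the word; stage 2: map each vowel or
    consonant cluster, given that stress, onto its phoneme string."""
    stress = stress_rule(word_features(word))   # e.g. 0 = key syllable, -1 = left syllable
    phonemes = []
    for cluster in split_clusters(word):
        feats = cluster_features(word, cluster, stress)
        phonemes.append(cluster_rule(feats))
    return "".join(phonemes)

# Toy call with trivial stand-ins (not the real induced rules); the cluster
# split for "preeminent" follows the example worked later in the text.
print(transcribe_word("preeminent",
                      stress_rule=lambda feats: -1,
                      cluster_rule=lambda feats: "?",
                      word_features=lambda w: {},
                      cluster_features=lambda w, c, s: {},
                      split_clusters=lambda w: ["pr", "ee", "m", "i", "n", "e", "nt"]))
```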
As the system transcribes new input texts, any improper transcriptions (i.e., mispronunciations) would be flagged by the user and added to the sample set for future generations of transcription rules.The first stage operates on "words "1 while the second stage operates on "clusters" of vowels or consonants. 2 Each word is examined individually, and "major stress "3 is assigned to one of the "syllables". ~ Major stress is assigned on the basis of certain "features" or "attrlbutes "5 extracted from the word (an example of a word-level attribute is "sufflx-type"). The assignment of major stress is always made uniquely for a given word. The assignment process consists of invoking and applying the "stress-rule".The "stress-rule" is one of two machinegenerated transcription rules, the other being the "cluster-rule". A transcription rule consists of a decision tree which, when invoked, is traversed on the basis of the feature values of the word or cluster under consideration.The transcription rule "test "6 is evaluated and the proper branch is then selected on the basis of values of the word features. The process is repeated until a leaf node of the tree is reached.The leaf node contains the value returned for that invocation of this transcription rule, which uniquely determines which syllable is to receive the major stress. I A "word" is delimited by conventional word separators such as common punctuation or blank spaces in the input stream.2 A "cluster" consists of contiguous vowels or contiguous consonants. The following classificatory scheme is used to determine if a letter is a vowel (-v-) or a consonant (-c-):"a m, "e", "i", and "o" are -v-, "u" is -v-unless it follows a "g" or "q", "i" is a special consonant represented by -i-, mr" is a special consonant represented by -r-, "y" is -v-if it follows -v-, -c-, -i-or -r-, "w" is -v-if it follows -v-.3 "Major stress" corresponds to that syllable which receives the most emphasis in spoken English.A "syllable" will be taken to be a set of two adjacent clusters, with the first cluster of the vowel type and the second cluster of the consonant type.For syllable division purposes, if the word begins with a consonant the first syllable in that word will consist solely of a consonant cluster. Similarly, if the word ends in a vowel then the final syllable will consist of a vowel cluster alone.In all other cases, a syllable will always consist of a vowel cluster followed by a consonant cluster.5 The terms "feature" and "attribute" will be used interchangeably to refer to some identifiable element in a word or cluster. For more information regarding word or cluster attributes see the following section. 6 A transcription rule "test" refers to the branching criteria at the current node.After word stress is assigned, each cluster within the word is considered sequentially. The cluster features are extracted, and the clusterrule is invoked and applied to obtain the phonemic transcription for that particular cluster. Note that one of the cluster features is the stress of the particular syllable to which the cluster belongs.In other words, it is necessary to determine major stress before it is possible to transcribe the individual clusters of which the word is comprised.The value returned from invoking the cluster rule is the phoneme string corresponding to the current cluster. 
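A minimal sketch of the vowel-consonant classification in footnote 2 follows. The source is hard to read in places, so it is assumed here that the two special consonants are "l" (mapped to -l-) and "r" (mapped to -r-), that "u" after "g" or "q" counts as a consonant, and that letters not covered by the listed rules (including word-initial "y" and "w") are ordinary consonants.

```python
def vc_map(word):
    """Map a word onto its vowel-consonant representation, one class per letter."""
    w = word.lower()
    out = []
    for i, ch in enumerate(w):
        prev = w[i - 1] if i > 0 else ""
        prev_class = out[-1] if out else ""
        if ch in "aeio":
            cls = "v"
        elif ch == "u":
            cls = "c" if prev in "gq" else "v"   # assumption: consonant after g/q
        elif ch == "l":
            cls = "l"                            # special consonant
        elif ch == "r":
            cls = "r"                            # special consonant
        elif ch == "y":
            cls = "v" if prev_class in ("v", "c", "l", "r") else "c"
        elif ch == "w":
            cls = "v" if prev_class == "v" else "c"
        else:
            cls = "c"
        out.append(cls)
    return "".join(out)

print(vc_map("preeminent"))   # -> "crvvcvcvcc" ("pr" maps to "cr", as in the later example)
```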
The current implementation of UTTER operates in one of three modes, each of which corresponds to one of the three tasks required of the system: (I) execution mode: the transcription of input text usir~ existing transcription rules.(2) trainin~ mode: flagglr~ incorrect transcriptions for inclusion in the next generation of transcription rules.(3) inference mode: automatic induction of a new set of transcription rules to cover the set of training examples (including any additions made in/2.~~.What follows is a more detailed description of each of these three modes of operation.Execution mode is UTTER's normal mode of operation.While in execution mode, UTTER accepts English input one sentence at a time and produces the corresponding pronunciation as a list of phonemes.What follows is a detailed description of each step taken by UTTER when operating in execution mode.(I) The input text is scanned for word and cluster boundaries, and lists of pointers to boundary locations in the string are constructed.The parser also counts the number of syllables in each word, and constructs a new representation of the original string which consists only of the letters 'v', 'c', 'i', and 'r'.This new representation, which will be referred to as the "vowel-consonant mapping," or simply "v-c map," is the same length as the original input. Therefore, all pointers to the original string (such as those showing word and cluster boundaries) are also applicable to the v-c map. The v-c map will be used in the extraction of cluster features.(2) Each word is now processed individually. The first step is to determine whether the next word belongs to the group of "function words". 8 If the search through the function word list is successful, it will return the cross-listed pronunciation for that word. Table look -up provides time-efflclent transcription for this small class of words which have a very high frequency of occurrence in the English language, as well as highly irregular pronunciations. If the word is a function word, its pronunciation is added to the output and processing continues with the next word.Positioning of function words provides a valuable clue to the syntax of the input. Syntactic information is essential in dlsamblguating certain words. Although the current version of UTTER supports part-ofspeech distinctions, the current version of the parser fails to supply this information.A new version of UTTER should include a better parser which is capable of making these sorts of part-of-speech dlstlnctlons. 9 Such a parser need not be very accurate in terms of the proper assignment of words to part-of-speech classes. However, it must be capable of separating identically spelled words into different classes on the basis of function.These words often differ in pronunciation, such as "present" (N) and "present" (V) or "moderate" (N) and "moderate" (V). In other words, the parser need not classify these two words as noun and verb, as long as it makes some distinction between them.(3) Each word is now checked against another llst of words (with their associated pronunciations) called the "permanent exception llst," or PEL. The PEL provides the 8For a complete listing of function words see Appendix B. 9 It should be possible to model a new parser on an existing parser which already makes this sort of part-of-speech distinction. 
For example, the STYLE program developed at Bell Laboratories provides a tool for analyzing documents [CherryBO] and yleids more part-of-speech classes than would be required for UTTER's purposes. user with the opportunity to specify common domaln-speclflc words whose transcription would best be handled by table-look-up, without reconstructing the pronunciation of the word each time it is encountered.The time required to search this llst is relatively small (provided the size of the llst itself is not too large) compared to the time necessary for UTTER to transcribe the word normally.If the word is on the PEL, its pronunciation is returned by the search routine and added to the output. Processing continues with the next word. These features are both necessary and sufficient to assign major stress to any given word [Dickerson81] .Although a detailed account of the selection of these features is beyond the scope of this paper, an example of an input word and the appropriate attribute values should give the reader a better grasp of the word-level feature concept.Consider the input word "preeminent". The weak suffix "ent" is stripped. Key-syllable (final syllable excluding suffixes) is "in". Left-syllable (left of key-syllable)is "eem". Prefix ("pre") overlaps left-syllable ("eem") since they share an "e".Proper stress placement for the word "preeminent" is on the left-syllable.(5) The word and its attributes are checked against a list of exceptions to the current stress rule (called the "stress exception list" or SEL). This llst is normally empty, in which case checklng does not take place. Additions to the list can only be made in training mode (see below).If the word and its features are indexed on the SEL, the SEL search returns the proper stress in terms of the number 0 or -1. If stress is returned as 0, major stress falls on the key-syllable. If stress is returned as -I, major stress falls on the leftsyllable.(6) If the word does not appear on the SEL, then the current stress rule is applied. The stress rule is essentially a decision tree which is traversed on the basis of the values of the word's word level attributes. Application of the stress rule also returns either 0 or -I. These features are necessary and sufficient to classify a cluster [Dickerson82] .As before, an example of cluster level attributes is appropriate. Consider the cluster "ee" (from our sample word "preeminent").The cluster type is "vowel". The cluster orthography is "ee". The left neighbor cluster map is "cr"(v-c map of "pr"). The right neighbor cluster is "m". The right neighbor cluster map is "c" (v-c map of "m"). The cluster position is "word-prefix boundary". The cluster is inside the syllable with major stress (see above).(8) The cluster and its associated attributes are checked against a list of exceptions to the cluster rule (called the "cluster exception list" or CEL). This list is normally empty, and addltlons can only be made in training mode (see below). If the search through the CEL is successful, it will return the proper pronunciation for the particular cluster. The pronunciation (in terms of a WES phoneme string) is added to the output, and processing continues with the next cluster in the current word, or with the next word.(9) The cluster transcription rule is applied to the current cluster. As in the case of the stress rule, the cluster rule is a decision tree which is traversed on the basis of the values of the cluster level attributes. 
The cluster rule returns the proper pronunciation for this particular cluster and adds it (in terms of a WES phoneme string) to the output. Processing continues with the next cluster in the current word, or with t~ next word in the input.When UTTER is operating in training mode, the system allows the user to correct errors in transcription interactively by specifying the proper pronunciation for the incorrectly transcribed word.The training mode operates in the same manner as the execution mode with the exception that, whenever either rule is applied (see steps 6 and 9 above), the user is prompted for a judgement on the accuracy of the rule. The user functions as the "oracle" who has the final word on what is to be considered proper pronunciation.Let us assume, for example, that the stress rule applied to a given word yields the result "stress left-syllable" (in other words, the rule application routine returns a -I) and the proper result should be "stress key-syllable" (or a result of 0). If the system were operating in execution mode, processing would continue and it is unlikely that the word would be properly transcribed. The user could switch to training mode and repeat the transcription of the problem word in the same context.In training mode, the user has the opportunity to inspect the results from every rule application, allowing the user to flag incorrect results.When an incorrect rule result is detected, the proper result (alone with the current features) will be saved on the appropriate exception list. In terms of the previous example, the current word and word-level features would be saved on the SEL.If the given word should arise again in the same context, the SEL would contain the exception to the transcription rule, prohibiting the application of the stress rule. The information from the SEL (and from the CEL at the clusterlevel) will be used to infer the next generation of transcription rules.It is important to note that UTTER makes a given mistake only once. If the transcription error is spotted and added to the SEL (or CEL, depending on which transcription rule is at fault) it will not be repeated as long as the exception information exists. The SEL (and CEL) can only be cleared by the rule inference process (see below) which guarantees that the new generation of rules will cover any example that is to be removed from the appropriate exception llst. In addition, consider the feature set [has-fur, llves-ln-water, can-fly, is-warmblooded] and assume there exists a method for extracting values for each feature of every entry in the training set (in this example, values would be "true" or "false" but this need not always be so). From this information, the inference routine would extract a decision tree whose branch nodes would be tests of the form "if has-fur is true then branch-left else branch-rlght" and whose terminal nodes would be of the form "the animal in question is a mammal." The premls is that such a decision tree would be capable of correctly classifying not only the examples contained in the training set but any other example whose feature values are known or extractable. I0What follows is a step-by-step description of the inference algorithm as applied to the generation of the stress transcription rule. Generation of the cluster transcription rule is similar, except that the cluster transcription rule returns a phoneme string rather than a number. 
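Before the step-by-step description, here is a short sketch of how an induced rule of this kind might be represented and applied: interior nodes test one attribute-value and branch, leaves return the result (0 or -1 for the stress rule, a phoneme string for the cluster rule). The toy tree below loosely encodes the animal example from the text and is purely illustrative.

```python
def apply_rule(tree, features):
    """Traverse a decision tree given a dict of feature values for one example."""
    while isinstance(tree, tuple):        # interior node: (attribute, value, if_true, if_false)
        attribute, value, if_true, if_false = tree
        tree = if_true if features.get(attribute) == value else if_false
    return tree                           # leaf: the rule's result

toy_rule = ("has-fur", True,
            "mammal",
            ("can-fly", True,
             "bird",
             ("lives-in-water", True, "fish", "reptile")))

print(apply_rule(toy_rule, {"has-fur": False, "can-fly": False,
                            "lives-in-water": True}))      # -> fish
```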
For a more complete discussion of the inference algorithm, which would be beyond the scope of this paper, see [Quinlan79].

(1) The current stress exception list is combined with the training set used to generate the previous stress transcription rule. The old training set is referred to as the "stress classified list," or SCL, and is stored following rule generation. 11 Since the SCL is not used again until a new rule is generated, it can be stored on an inexpensive remote device, such as magnetic tape. The SCL (as well as the CCL) tends to become quite large. 12

(2) Features are extracted for each of the entries in the training set. Features which cannot be extracted in isolation, such as the part-of-speech of a given word, are stored along with the entry and its result in the SEL. These unextractable attributes rely on the context the entry appeared in rather than on the entry itself and, therefore, cannot be reconstructed "a posteriori." The training set now consists of all of the entries from the SCL and the SEL, as well as all of the features for each entry. At this point an initial "window" on the training set is chosen. Since the inference algorithm's execution time increases combinatorially with the size of the training set, it is wise to begin the inference procedure with a subset of the training set. This is acceptable since there is often a relatively high rate of redundancy in the training set. The selection of the window may be done arbitrarily (as in the current version of UTTER), or one might try to select an initial window with the widest possible set of feature values. 13

(3) For each "attribute-value" 14 in the current window a "desirability index" is computed. This index directly reflects the ability of a test on the attribute-value to split the window into two relatively even subwindows. The current version of UTTER uses a desirability index defined as the number of samples with this attribute-value divided by the number of distinct final values in this subset. Different desirability indices might be substituted to reflect the information content of attribute-values. When generating rules using UTTER the user has the option of using either only a test for equality in the decision tree, or a larger set of tests containing "equals," "not-equals," "less-than," and "greater-than". If the larger set of possible tests is used, then the inference routine takes much longer to execute. However, the decision trees generated using the larger set are often smaller and therefore usually faster to traverse.

(4) The attribute-value with the greatest desirability index is chosen as the next test in the decision tree. This test is added to the decision tree. In this manner, examples occurring most frequently will take the least amount of time to classify and, thus, to transcribe. 15

(5) The current window is split into two subwindows. The split is based on which examples in the window contain the attribute-value selected as the new test, and which examples do not.

(6) For each subwindow, it is determined whether there is only one result value in a given subwindow (i.e., is the result uniform on the window?) or whether there is more than one result.

(7) If there is more than one result in a subwindow, this procedure is applied recursively with the subwindow as the new window. If there is only one result across a given subwindow, then a "terminal" or "leaf" node is generated for the decision tree which returns this singular result as the value of the tree at that terminal. Terminal nodes are thus easily recognized since they have only one distinct result.

(8) When the original window is completely classified, the resulting decision tree is the new rule, which is guaranteed to cover the original window. The newly generated rule is applied to the remaining examples in the training set. From the examples it fails to correctly classify, a subset of the failures is chosen for addition to the previous iteration's starting window. The inference algorithm is reapplied using this new starting window.

(9) When no failures exist, the most recently generated decision tree completely covers the training set. In this case, the training set then becomes the SCL, and is stored in remote storage until the next rule-generating session. The most recently generated decision tree becomes the new rule and the SEL is zeroed. (A compact sketch of this induction loop, steps (3)-(9), is given after the footnotes below.)

10 The inference algorithm need not be time- or space-efficient. In fact, in the current implementation of UTTER, it is neither. This observation is not particularly alarming, since inference mode is not used very often, in comparison to execution or training modes (where space- and time-efficiency are particularly vital to fast text transcription). There are some inference systems [Oakey81] in which the inference routine is somewhat streamlined and not nearly as inefficient as in the case of the current implementation. Future versions of UTTER might consider using a more streamlined inference routine. However, since the inference routine need not be invoked very often, its inefficiency does not have any effect on what the user perceives as transcription time.

11 The equivalent list in the cluster transcription rule case is called the "cluster classified list," or CCL.

12 It should be possible to use an existing computer-encoded pronunciation dictionary (or a subset thereof) to provide the initial SCL and CCL. The current version of UTTER uses null lists as the initial SCL and CCL, and therefore forces the user to build these lists via the SEL and CEL. This implies a rather time-consuming process of running text through UTTER in training mode. An existing pronunciation dictionary would allow training mode to be used rather infrequently, and then only to make more subtle corrections to the transcription rules.

13 The selection of all those examples which have unique combinations of feature values should reduce the number of iterations required in the inference routine by eliminating redundant entries in the training set. This type of training set pruning should be done at the same time the training set is scanned for clashes (discussed below).

14 An "attribute-value" refers to the value of a feature or attribute for the given example. For instance, let the attribute in question be the word-level attribute "part-of-speech" and assume it may take one of five possible values (noun, verb, adjective, adverb, or function word). If this attribute appears with only three values (such as noun, verb, adjective) in the current window, then only those three attribute-values need be considered.

15 In certain pathological cases, the tree generated is not optimal in terms of traversal time. This problem has not yet occurred with real transcription data, and, in any case, would still yield an acceptable, though less than optimal, decision tree.
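Steps (2) through (9) describe, in effect, a Quinlan-style windowed induction of a decision tree. The sketch below is a simplified reconstruction under stated assumptions, not UTTER's code: examples are assumed to be (feature-dictionary, result) pairs, only equality tests are used, and the desirability index is taken to be the count of samples carrying the attribute-value divided by the number of distinct results among them, as reconstructed above.

# Sketch of the windowed rule-induction loop (assumed representation).

def desirability(examples, attr, val):
    results = [r for f, r in examples if f.get(attr) == val]
    if not results:
        return 0.0
    return len(results) / len(set(results))   # samples / distinct final values

def build_tree(window):
    outcomes = {r for _, r in window}
    if len(outcomes) == 1:                    # terminal ("leaf") node
        return outcomes.pop()
    candidates = {(a, v) for f, _ in window for a, v in f.items()}
    attr, val = max(candidates, key=lambda av: desirability(window, *av))
    yes = [(f, r) for f, r in window if f.get(attr) == val]
    no = [(f, r) for f, r in window if f.get(attr) != val]
    if not yes or not no:                     # attributes cannot separate the
        raise ValueError("clash in window")   # examples: a "clash"
    return (attr, val, build_tree(yes), build_tree(no))

def classify(tree, features):
    while isinstance(tree, tuple):
        attr, val, if_true, if_false = tree
        tree = if_true if features.get(attr) == val else if_false
    return tree

def induce_rule(training_set, window_size=8):
    window = list(training_set[:window_size])          # initial window, step (2)
    while True:
        tree = build_tree(window)                      # steps (3)-(8)
        failures = [ex for ex in training_set
                    if classify(tree, ex[0]) != ex[1]]
        if not failures:                               # step (9): full coverage
            return tree
        window.extend(failures[:window_size])          # grow the window, retry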
It is, of course, possible to terminate the inference algorithm before it completely classifies the training set. In this case, UTTER simply places all of the "failures" on the SEL and all of the properly classified examples from the training set on the SCL. In this fashion it is possible to reduce the size of the SEL without exhaustively classifying the entire training set. The procedure for creating a cluster rule is identical.

In the course of rule generation, an inconsistency called a "clash" may arise when the attributes are insufficient to classify two or more examples. A clash manifests itself as a window with uniform values for all of the attributes, but with more than one result present in the window. The current version of UTTER aborts the rule generation process when a clash occurs. Future versions of UTTER should screen the entire training set for clashes before starting the rule generation process, as well as allow the user to remove or correct the entries responsible for the clash (a sketch of such a screening pass is given below).

Clashes are usually the result of an error made by the user in training mode. If a clash should arise which is not the result of a user error, it would indicate that the attribute set is insufficient to characterize the set of transcriptions. Additional attributes would have to be added to UTTER in order to handle this event. For example, the word "read" is pronounced differently in present tense than it is in past tense. Since UTTER cannot extract contextual or semantic information, the distinction cannot be made. Therefore, two entries in the training set might be present with the same attributes, but different transcriptions. This situation results in a clash which cannot be resolved without the addition of another attribute, such as "tense." Fortunately, such cases account for a very small portion of the English language.
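The pre-induction clash screen proposed above (not part of the current implementation, which simply aborts) would amount to grouping training examples by their attribute values and flagging any group with more than one result. A possible sketch follows; the feature names and phoneme strings in the example data are invented for illustration.

# Sketch of a clash screen: examples whose attribute values are identical
# must all share one result, otherwise the attribute set cannot
# characterise the transcription and the user should intervene.

from collections import defaultdict

def find_clashes(training_set):
    groups = defaultdict(set)
    for features, result in training_set:
        groups[tuple(sorted(features.items()))].add(result)
    return {key: results for key, results in groups.items() if len(results) > 1}

samples = [({"spelling": "read", "suffix-type": "none"}, "R EE D"),
           ({"spelling": "read", "suffix-type": "none"}, "R E D")]
print(find_clashes(samples))   # the two pronunciations of "read" clash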
This paper has described a newly developed system for the transcription of unmarked English text into strings of phonemes for eventual computer speech output. The current implementation of the system has shown this technique to be feasible in terms of speed of execution and storage requirements, and desirable in terms of transcription accuracy.

One of the unique features of UTTER is the possibility of creating "mini-implementations" of UTTER for use on ever more popular microcomputers. These reduced versions of UTTER would only need to provide execution mode. The two transcription rules could be developed on a full-scale system, and provided to the user on floppy diskettes for use on a microcomputer. The micro systems need not provide a training mode, so no SEL or CEL need be retained (or checked during the transcription process). The PEL should still be provided so the user could tailor the operation of the system to the particular application by adding domain-specific words to this list. The micro systems need not supply an inference mode, which requires the most processor time and memory space of all the modes of operation. Updated rules (on floppy diskettes) could be provided periodically from the main system, thus keeping memory and storage requirements well within the capabilities of today's microcomputers.

Accurate phoneme string transcription from unmarked text will become increasingly vital as speech synthesis technology continues to improve. Better speech synthesis tools will encourage the trend from digitally-encoded recorded messages (as well as other phrase- or word-based computer speech methods) towards sub-word synthetic speech methods (such as diphone or phoneme based synthesis). The UTTER system is an example of a new approach to this old problem, embodying features from both the linguistic and artificial intelligence communities.

For a complete listing of the World English Spelling phonetic alphabet see Appendix A.
null
null
null
null
Main paper: : This paper describes the operation of a program which operates as a front end for the dlphone speech synthesis system. The UTTER (for "Unmarked Text Transcription by Expert Rule") system maps English text onto a phoneme string, which is then used as an input to the dlphone speech synthesis system. The program is a twotiered Expert System which operates first on the word level and then on the (vowel or consonant) cluster level.The system's knowledge about pronunciation is organized in two decision trees automatically generated by an induction algorithm on a dynamically specified "training set" of examples.in that they are often unable to cope with a letter pattern that maps onto more than one phoneme pattern. Extreme cases are those words which, although differing in pronunciation, share orthographic representations (an analogous problem exists in speech recognition, where words which share phonemic representations differ in orthographic representation, and therefore possibly in semantic interpretation).A notable exception is the MIT speech synthesis system fAllen81] which is llngulstlcally-based, but not solely phoneme-based.A desirable feature in any rule-based system is the ability to automatically acquire or modify its own rules. Previous work [Oakey81] applies this automatic inference process to the text-tophoneme transcription problem.Unfortunately, Onkey's system is strlctly letter-based and suffers from the same deficiencies as other nonilnguistlcally-based systems.The UTTER system is an attempt to provide a llngulstlcally-based transcription system which has the ability to automatically acquire its own rule base.Most speech synthesis systems in use today require that eventual utterances be specified in terms of phoneme strings. The automatic transformation of normal English texts into phoneme strings is therefore a useful front-end process for any speech synthesis unit which requires such phonemic utterance specification.Unfortunately, this transcription process is not nearly as straightforward as one might initially imagine. It is common knowledge to nonnatlve speakers that English poses some particularly treacherous pronunciation problems. This is due, in part, to the mixed heritage of the language, which shares several orthographic bloodlines.Past attempts to create orthographicallybased computer algorithms have not met with great success. Algorithms such as the Naval Research Laboratory pronunciation algorithm [Elovitz76] are letter-based instead of llnguistlcally-based. For this reason, such algorithms are excessively rigidThe system's basic goal is the transcription of input text into phoneme strings. The method used to accomplish this goal is based on a method taught to foreign students which enables them to properly pronounce unknown English words [DickersonF1, DickersonF2] . The method is basically a two stage process. The first stage consists in assigning major stress to one of the word's syllables. The second stage maps a vowel or consonant group with a known stress value uniquely onto its corresponding phoneme string. It is the stress-asslgnment process which distinguishes this pronunciation method from applying purely letterbased text-to-speech rules, as in, for example, the Naval Research Laboratory algorithm [Elovltz76].In order to accomplish the transcription of text into phoneme strings, the system uses a set of two transcription rules which are machine generated over a set of sample transcriptions. 
As the system transcribes new input texts, any improper transcriptions (i.e., mispronunciations) would be flagged by the user and added to the sample set for future generations of transcription rules.The first stage operates on "words "1 while the second stage operates on "clusters" of vowels or consonants. 2 Each word is examined individually, and "major stress "3 is assigned to one of the "syllables". ~ Major stress is assigned on the basis of certain "features" or "attrlbutes "5 extracted from the word (an example of a word-level attribute is "sufflx-type"). The assignment of major stress is always made uniquely for a given word. The assignment process consists of invoking and applying the "stress-rule".The "stress-rule" is one of two machinegenerated transcription rules, the other being the "cluster-rule". A transcription rule consists of a decision tree which, when invoked, is traversed on the basis of the feature values of the word or cluster under consideration.The transcription rule "test "6 is evaluated and the proper branch is then selected on the basis of values of the word features. The process is repeated until a leaf node of the tree is reached.The leaf node contains the value returned for that invocation of this transcription rule, which uniquely determines which syllable is to receive the major stress. I A "word" is delimited by conventional word separators such as common punctuation or blank spaces in the input stream.2 A "cluster" consists of contiguous vowels or contiguous consonants. The following classificatory scheme is used to determine if a letter is a vowel (-v-) or a consonant (-c-):"a m, "e", "i", and "o" are -v-, "u" is -v-unless it follows a "g" or "q", "i" is a special consonant represented by -i-, mr" is a special consonant represented by -r-, "y" is -v-if it follows -v-, -c-, -i-or -r-, "w" is -v-if it follows -v-.3 "Major stress" corresponds to that syllable which receives the most emphasis in spoken English.A "syllable" will be taken to be a set of two adjacent clusters, with the first cluster of the vowel type and the second cluster of the consonant type.For syllable division purposes, if the word begins with a consonant the first syllable in that word will consist solely of a consonant cluster. Similarly, if the word ends in a vowel then the final syllable will consist of a vowel cluster alone.In all other cases, a syllable will always consist of a vowel cluster followed by a consonant cluster.5 The terms "feature" and "attribute" will be used interchangeably to refer to some identifiable element in a word or cluster. For more information regarding word or cluster attributes see the following section. 6 A transcription rule "test" refers to the branching criteria at the current node.After word stress is assigned, each cluster within the word is considered sequentially. The cluster features are extracted, and the clusterrule is invoked and applied to obtain the phonemic transcription for that particular cluster. Note that one of the cluster features is the stress of the particular syllable to which the cluster belongs.In other words, it is necessary to determine major stress before it is possible to transcribe the individual clusters of which the word is comprised.The value returned from invoking the cluster rule is the phoneme string corresponding to the current cluster. 
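The vowel/consonant classification of footnote 2 can be sketched as a single left-to-right pass over the word. The fragment below is only an approximation: the footnote is partly garbled in this copy, the source's "-i-" is read here as the special consonant "l", and the conditions shown for "u", "y" and "w" are assumptions rather than a faithful transcription of the rules.

# Approximate sketch of the v-c mapping (vowel 'v', consonant 'c', and the
# special consonants 'l' and 'r'); the map has the same length as the word.

def vc_map(word):
    out = []
    for i, ch in enumerate(word.lower()):
        prev = word[i - 1].lower() if i > 0 else ""
        if ch in "aeio":
            out.append("v")
        elif ch == "u":
            out.append("c" if prev in "gq" else "v")   # assumed reading
        elif ch == "l":
            out.append("l")                            # special consonant
        elif ch == "r":
            out.append("r")                            # special consonant
        elif ch in "yw":
            out.append("v" if out and out[-1] == "v" else "c")  # assumption
        else:
            out.append("c")
    return "".join(out)

print(vc_map("preeminent"))   # -> "crvvcvcvcc"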
The current implementation of UTTER operates in one of three modes, each of which corresponds to one of the three tasks required of the system: (I) execution mode: the transcription of input text usir~ existing transcription rules.(2) trainin~ mode: flagglr~ incorrect transcriptions for inclusion in the next generation of transcription rules.(3) inference mode: automatic induction of a new set of transcription rules to cover the set of training examples (including any additions made in/2.~~.What follows is a more detailed description of each of these three modes of operation.Execution mode is UTTER's normal mode of operation.While in execution mode, UTTER accepts English input one sentence at a time and produces the corresponding pronunciation as a list of phonemes.What follows is a detailed description of each step taken by UTTER when operating in execution mode.(I) The input text is scanned for word and cluster boundaries, and lists of pointers to boundary locations in the string are constructed.The parser also counts the number of syllables in each word, and constructs a new representation of the original string which consists only of the letters 'v', 'c', 'i', and 'r'.This new representation, which will be referred to as the "vowel-consonant mapping," or simply "v-c map," is the same length as the original input. Therefore, all pointers to the original string (such as those showing word and cluster boundaries) are also applicable to the v-c map. The v-c map will be used in the extraction of cluster features.(2) Each word is now processed individually. The first step is to determine whether the next word belongs to the group of "function words". 8 If the search through the function word list is successful, it will return the cross-listed pronunciation for that word. Table look -up provides time-efflclent transcription for this small class of words which have a very high frequency of occurrence in the English language, as well as highly irregular pronunciations. If the word is a function word, its pronunciation is added to the output and processing continues with the next word.Positioning of function words provides a valuable clue to the syntax of the input. Syntactic information is essential in dlsamblguating certain words. Although the current version of UTTER supports part-ofspeech distinctions, the current version of the parser fails to supply this information.A new version of UTTER should include a better parser which is capable of making these sorts of part-of-speech dlstlnctlons. 9 Such a parser need not be very accurate in terms of the proper assignment of words to part-of-speech classes. However, it must be capable of separating identically spelled words into different classes on the basis of function.These words often differ in pronunciation, such as "present" (N) and "present" (V) or "moderate" (N) and "moderate" (V). In other words, the parser need not classify these two words as noun and verb, as long as it makes some distinction between them.(3) Each word is now checked against another llst of words (with their associated pronunciations) called the "permanent exception llst," or PEL. The PEL provides the 8For a complete listing of function words see Appendix B. 9 It should be possible to model a new parser on an existing parser which already makes this sort of part-of-speech distinction. 
For example, the STYLE program developed at Bell Laboratories provides a tool for analyzing documents [CherryBO] and yleids more part-of-speech classes than would be required for UTTER's purposes. user with the opportunity to specify common domaln-speclflc words whose transcription would best be handled by table-look-up, without reconstructing the pronunciation of the word each time it is encountered.The time required to search this llst is relatively small (provided the size of the llst itself is not too large) compared to the time necessary for UTTER to transcribe the word normally.If the word is on the PEL, its pronunciation is returned by the search routine and added to the output. Processing continues with the next word. These features are both necessary and sufficient to assign major stress to any given word [Dickerson81] .Although a detailed account of the selection of these features is beyond the scope of this paper, an example of an input word and the appropriate attribute values should give the reader a better grasp of the word-level feature concept.Consider the input word "preeminent". The weak suffix "ent" is stripped. Key-syllable (final syllable excluding suffixes) is "in". Left-syllable (left of key-syllable)is "eem". Prefix ("pre") overlaps left-syllable ("eem") since they share an "e".Proper stress placement for the word "preeminent" is on the left-syllable.(5) The word and its attributes are checked against a list of exceptions to the current stress rule (called the "stress exception list" or SEL). This llst is normally empty, in which case checklng does not take place. Additions to the list can only be made in training mode (see below).If the word and its features are indexed on the SEL, the SEL search returns the proper stress in terms of the number 0 or -1. If stress is returned as 0, major stress falls on the key-syllable. If stress is returned as -I, major stress falls on the leftsyllable.(6) If the word does not appear on the SEL, then the current stress rule is applied. The stress rule is essentially a decision tree which is traversed on the basis of the values of the word's word level attributes. Application of the stress rule also returns either 0 or -I. These features are necessary and sufficient to classify a cluster [Dickerson82] .As before, an example of cluster level attributes is appropriate. Consider the cluster "ee" (from our sample word "preeminent").The cluster type is "vowel". The cluster orthography is "ee". The left neighbor cluster map is "cr"(v-c map of "pr"). The right neighbor cluster is "m". The right neighbor cluster map is "c" (v-c map of "m"). The cluster position is "word-prefix boundary". The cluster is inside the syllable with major stress (see above).(8) The cluster and its associated attributes are checked against a list of exceptions to the cluster rule (called the "cluster exception list" or CEL). This list is normally empty, and addltlons can only be made in training mode (see below). If the search through the CEL is successful, it will return the proper pronunciation for the particular cluster. The pronunciation (in terms of a WES phoneme string) is added to the output, and processing continues with the next cluster in the current word, or with the next word.(9) The cluster transcription rule is applied to the current cluster. As in the case of the stress rule, the cluster rule is a decision tree which is traversed on the basis of the values of the cluster level attributes. 
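Steps (5) and (6), an exception-list lookup followed by a traversal of the stress decision tree, can be pictured with the "preeminent" example above. The tree below is invented purely for illustration (it is not UTTER's generated rule), and the feature names loosely follow the wording of the text.

# Illustrative stress-rule application: branch nodes test word-level
# attribute values; leaves return 0 (stress the key-syllable) or
# -1 (stress the left-syllable). The tree itself is made up.

stress_rule = ("suffix-type", "weak",
               ("prefix-overlaps-left-syllable", True, -1, 0),
               0)

def apply_rule(tree, features):
    while isinstance(tree, tuple):
        attr, value, if_true, if_false = tree
        tree = if_true if features.get(attr) == value else if_false
    return tree

preeminent = {"suffix-type": "weak",            # "ent" stripped
              "key-syllable": "in",
              "left-syllable": "eem",
              "prefix-overlaps-left-syllable": True}
print(apply_rule(stress_rule, preeminent))      # -> -1: stress falls on "eem"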
null
null
null
null
{ "paperhash": [ "oakey|inductive_learning_of_pronunciation_rules_by_hypothesis_testing_and_correction", "elovitz|letter-to-sound_rules_for_automatic_translation_of_english_text_to_phonetics", "glinski|diphone_speech_synthesis_based_on_a_pitch-adaptive_short-time_fourier_transform" ], "title": [ "Inductive Learning of Pronunciation Rules by hypothesis Testing and Correction", "Letter-to-sound rules for automatic translation of english text to phonetics", "Diphone Speech Synthesis Based on a Pitch-Adaptive Short-Time Fourier Transform" ], "abstract": [ "This paper describes a system that learns the rules of pronunciation inductively. It begins with a set of 26 rules for single-letter pronunciation. Individual words are presented to it, and the system uses its rule set to hypothesise a pronunciation. This is compared with a dictionary pronunciation, and if any part of the pronunciation is incorrect new rules are created to handle the word as an exception condition. \n \nThese rules are checked for similarity with others already produced, and where suitable a \"general\" rule is produced to deal with two or more created rules. The effect is to produce rules that are more and more general, and these approach the general pronunciation rule sets that have been produced manually by other workers.", "Speech synthesizers for computer voice output are most useful when not restricted to a prestored vocabulary. The simplest approach to unrestricted text-to-speech translation uses a small set of letter-to-sound rules, each specifying a pronunciation for one or more letters in some context. Unless this approach yields sufficient intelligibility, routine addition of text-to-speech translation to computer systems is unlikely, since more elaborate approaches, embodying large pronunciation dictionaries or linguistic analysis, require too much of the available computing resources. The work here described demonstrates the practicality of routine text-to-speech translation. A set of 329 letter-to-sound rules has been developed. These translate English text into the international phonetic alphabet (IPA), producing correct pronunciations for approximately 90 percent of the words, or nearly 97 percent of the phonemes, in an average text sample. Most of the remaining words have single errors easily correctable by the listener. Another set of rules translates IPA into the phonetic coding for a particular commercial speech synthesizer. This report describes the technical approach used and the support hardware and software developed. It gives overall performance figures, detailed statistics showing the importance of each rule, and listings of a translation program and another used in rule development.", "The purpose of this work is to investigate a new method of speech synthesis from phonetic specifications. The investigation includes the design, computer simulation, and subjective evaluation of a speech analysis-synthesis system. The method is new in the sense that it utilizes two novel analytical techniques: (1) discrete pitch-adaptive short-time Fourier analysis, and (2) diphone representation of real speech. \nThe pitch-adaptive transformation is implemented via a sliding rectangular window whose edges are located at zero crossings of the speech signal, and whose length is one pitch period for voiced regions and constant for fricative regions. This approach is shown to result in a more accurate spectral representation and to offer possibilities for data compression. 
Algorithms are developed for dynamic pitch, intensity, and time axis warping of the signal during synthesis. \nUsing the adaptive transform, the author's voice is analyzed to produce several diphone templates. These templates are concatenated and smoothed to form synthetic English speech. Results indicate that by using aforementioned techniques, it is possible to produce very intelligible synthetic speech which retains, to a limited extent, the voice quality of the original speaker." ], "authors": [ { "name": [ "S. Oakey", "R. Cawthorn" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "H. Elovitz", "R. Johnson", "A. McHugh", "J. Shore" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "S. Glinski" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null ], "s2_corpus_id": [ "18376731", "62532275", "112104833" ], "intents": [ [], [], [] ], "isInfluential": [ false, false, false ] }
null
497
0.012072
null
null
null
null
null
null
null
null
2f7d8bfc6394c67f97c58cf3e6a08ec62ffa5573
129750
null
The Generation of Term Definitions From an On-Line Terminological Thesaurus
A new type of machine dictionary is described, which uses terminological relations to build up a semantic network representing the terms of a particular subject field, through interaction with the user. These relations are then used to dynamically generate outline definitions of terms in on-line query mode. The definitions produced are precise, consistent and informative, and allow the user to situate a query term in the local conceptual environment. The simple definitions based on terminological relations are supplemented by information contained in facets and modifiers, which allow the user to capture different views of the data.
{ "name": [ "McNaught, John" ], "affiliation": [ null ] }
null
null
First Conference of the {E}uropean Chapter of the Association for Computational Linguistics
1983-09-01
8
1
null
This paper describes an on-golng project being carried cot at U~T, which is concerned with the nature, constructicn and use of specialised machine dictionaries, concentrating on one particular type, the terminological thesanrus (Sager, 1981) . ~ system described here is capable of c~ynem~ically producing outline definitions of technical terms, and it is this feature which distinguishes it from other automated dictionaries.These traditional reference tools, while often containing hi~n quality terminology collected after painstaking research, do not in the normal case afford the user an overall conceptual view of the subject field, as they exhibit a relative lack of structure. Moreover, due to the limitations of the printed page, the form%t of the entries is fixed, such that 1~sers with differing information needs are obliged to search through to them irrelevant data. The only real aid ~ich allows the user to place a term roughly in the local conceptual environment is the conve~.tional def~_nition. However, such definitions tend to be idiosyncratic, inconsistent and nonrigorous, especially if the subject field is of any great size. Contexts, while of some help, are * sponsored by the Department of Education and Science through the award of a Research Fellowship in fx~fo1~tion Science notoriously difficult to find and control, and should only be seen as supplementary to a rigorous definiticn of the term which firmly places the term in the conceptual space. Those reference tools containing definitions which are rigorous exist mainly in the form of glossaries established by standards bodies. However, standardisation of terminologies is a slow affair, and is restricted to certain key terms or fields, such that no overall conceptual structure r,~y be obtained from these glossaries.Banks offer many advantages over traditional dictionaries, and are becoming more and more common, especially among organisations which have urgent terminolo~ needs, such as the Co~ssicn of the European Communities or the electronics firm Siemens AG in W. Germar~y. National bodies likewise use term banks to control the creation and dissemination of new standaraised terms, e.g. AFNOR (France) and DIN (W. Germany). In the UK, work is going ahead, coordinated by UMIST and the British Library, to set up a British Term Bank. Other in~portant term banks exist in Denmark (DANTe) and Sweden (TE~K).However, despite this growth in the number of term b~nks and other computer based dictionaries, there remains a sad lack of overall structuring of the terminological data. In some cases, dictionsries have been transferred directly onto computer, in other cases, data base management considerations have overriden any attempt at systematic terminological representation of the data. Some term banks have made provision for expressing~ relations between terms (AFNOR, DANT~I) but these relations are not as yet exploited to their full.Zhese tools, whether on-line or published from nr~gnetic tape, represent gross groupings of terms (via descriptors ) for the purpose of indexing and retrieval of documents. A hierarchical structure is apparent in a thesaurus, with general relationships beh~g established between descriptors, such as BT (Broad Term), NT (Narrow Term) and RT (Related Term). Some thesauri further distinguish e.g. 
BqG (Broad Term Generic), ~TP (Narrow Term Partitive) ~nd so on.However, by its very nature and purpose, a DT is merely a tool for selecting :rod differentiating between the chosen items of the ~rtificial reference system of an indexing language. The existence of overlapping and even parallel indexing languages attests the inadequacy of Errs for representing generally accepted terminological relationships. Other problems associated with DTs are highlighted when attempts are made to merge DTs and to match descriptors across language boundaries. Existing DTs also find great difficulty in representing polyhierarchies (Wall, 1980) hence the ambiguous nature of the RT relation. The best known attempt at solving such difficulties is the ~ESAUROFACET (Aitchison, 1970) • D. Terminological Thesauri (Trs)Traditionally, the Tr (as advocated by e.g. WGster, 1971) represents relationships between concepts rather than descriptors in as much detail as possible. As such, it has mainly been the preserve of terminologists.The Tr has the advantage of precisely situating a term in the conceptual environment, through msk_Ing appeal to relationships such as generic and partitive (and their various detailed subdivisions ), and to relations of synonymy (quasi-, full synonyms, etc) and antonyrm/. A classic example of the Tr approach to structuring data is the Dictionary of the Machine Tool (%~dster, 1968) , which has served as a basis for the present project.However, although systematic in conception and detailed in execution, this particular work displays the constraints inherent in the WGsterian approach, which is akin to that of the DT, namely reliance on the hierarchy as a structuring tool. For example, given the partial sub-tree in figure la. : PRINTF/There exists a need for a representational device which c~n capture the necessary relationships between terms in a natural and informative manner, and which is not constrained by the limitations of the printed page, or the mental capacity of the terminologist.The present project has concentrated on finding a device capable of responding to the demands of different users of terminology, and which would allow a systematic representation of terminological data. We have retained the term Terminological Thesaurus, but have given it a new meaning. The particular device we have constructed combines the advantages of the conventional TT (systematic structure, relationships) and of the traditional dictionary (definitions). This is achieved by using inter-term relationships first to construct a highly complex network of terms, and subsequently, at the retrieval stage, to generate natural langu~e defining sentences which relate the retrieved term to others in its terminological field. This is done by means of templates, such that the user is presented with an outline definition of a term (or several definitions, if a term contracts relations with more than (me term) which will help him to circumscribe the meaning of the term precisely. Although the particular orientation of the project is to generate definitions, the semantic network that is constructed could be used for other ends, and future work will investigate these possibilities. We stress here that the definitions that are produced are not distinct texts stored in the machine and associated with individual terms; rather, the declared relationships between terms are used to dynamically build up a definition, and terms from the immediate conceptual environment are slotted into natural language defining templates. 
These definitions have the advantages of being precise, system internal and alws~vs correct, providing the correct relationships have been sqtered. Preliminary work in this area was first carried out at L%ffST in the late 70s, when the feasibility of using terminological relationships to structure data was shown, and an experimental syst~: was implemented, based on a hierarchical repre~entation, that output simple definitions (Harm, 1978) . This was found to be inadequate, for the reasons outlined above, hence the adoption in the present project of a richer data structure.The data base for the system is then a senantic network. ~s with most semantic networks, the most one can really say about it is that it consists of nodes and arcs: terms form the nodes, and relations between terms the arcs. In actual fact, the data base consists of several files, with the character strings of ter~ns being assi~ed to one file, such that all search and creation operations for the network proper eu'e carried out using simply logical pointers to bare nodes carrying the geometrical information needed to sustain the network, thus avoiding the overhead of storing variable length strings often in duplicate. A virtual memory has been implemented such that file accesses are kept to a minimum, and all pointer chains are followed in fast core. The basic data structure of the network is the ring, and the appearance of the network is that of a multiway extended tree st~cture. Facilities exist for on-line interactive creation and search of the network. An important design principle is that the computer should relieve the terminologist (or indeed the naive user) of the burden of keeping track of the spread and growth of a conceptual structure.We have already seen how the hierarchical approach to terminology failed to account for all the facts, and forced the terminologist into misrepresenting or distorting the conceptual framework. With a network, ease and naturalness of representation is achieved, but at the cost of increased complexity for the human mind. Thus a human will quickly lose track of the ramifications of a network, even if he could represent these adequately on some two dimensional medium. Entrusting the management of the network to the computer ensures precision and consistency in a very large data base.At the input stage, in the simple case, the terminologist need give only 3 pieces of information:two terms and the relationship between them. As the system is open-ended by design, the terminologist can declare new relationships to the system as he works, i.e. it is not necessary to firstly elaborate a set of relationships.Further, neither of the two input terms need necessarily be present in the data base. If both are absent, the system will create 8 closed sub-network, which will only be linked to the .~uin network ~len other liD/as are n~e with one or both of these terms. As input proceeds, one may have the (perhaps non-consecutive) inputs <X rel A> <Y rel A> <Z rel A> where {X,Y,Z,A) are terms and <rel> a relationship. The system Will link all terms related to <A> in a ring having <A> f]~gged as the 'head' node.Thus the terminologist is not ~equired to overtly state the relationships between {X,Y,Z}. laaving the computer to establsih links among terms from an initial single input relationship ensures high recall. 
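The linking behaviour described above, where separate inputs <X rel A>, <Y rel A> and <Z rel A> end up as brothers in a single ring headed by A, can be sketched as follows. This is a Python illustration standing in for the system's own structures; the class and method names are invented.

# Sketch: terms related to the same head by the same relationship are
# gathered into one ring flagged with that head, without the terminologist
# having to relate the brother terms to one another.

from collections import defaultdict

class ThesaurusNet:
    def __init__(self):
        self.rings = defaultdict(list)     # (head, relation) -> brother terms

    def add(self, term, relation, head):
        ring = self.rings[(head, relation)]
        if term not in ring:
            ring.append(term)

net = ThesaurusNet()
for term in ("X", "Y", "Z"):
    net.add(term, "type of", "A")          # three separate inputs <term rel A>
print(net.rings[("A", "type of")])         # -> ['X', 'Y', 'Z']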
Note that choice of refined relationships aids hig~ precision, although too msny refinements may be detrimental ~m retrieval, in which case some automatic mec~hanism for Widening the search to include closely associated relationships would be necessary.However, this would imply that information be conveyed to the system regarding the associations between relationships, and would be a strong argument in favour of designing a set of relationships prior to the input of tei~s. At present, we have no strong views on this subject.The syst~n is open-ended to accept new relationships; it is up to the terminolo6ist how he organises his work.In the complex case, where there are perhaps several terms having the same relationship as the input term to a common 'head', or where the 'head' may have several sub-groups (q.v.) associated with it, the system interacts with the user to tell him there are several possibilites for placing a term in the network, and shows him structured groups of brother terms having the same relationship as the input term to the 'head', where his input term may fit in. It is important to realise that the user need have no knowledge of the organisation of the network.He is asked to make terminological decisions about how an input term relates to others in the immediate conceptual environment.The notion of ' sub-group' is the only one which requires explanation in terms of the theor~j behind the orgardsation of the system. This notion was introduced in an attempt to represent the fact that there may be terms that are mutually exclusive alternatives, and which attract other terms which can cooccur without restriction. A simplistic example will make this point clearer. For the sake of discussion, we assume the following parts of a radio, shown in figure 2. :RADIO VALVE TRANSISTOR AERIAL figure 2. Simplified parts of a radiowhat we wish to represent is the fact that if a radio has valves, it has no transistors, and vice versa, but whichever is tl~e case, there is always an aerial present. What has happened here, terminologically, is that there are two terms missing from the concept space, referring to the concepts ' valve radio' and ' transistor radio ' respectively. Or it my be the case that the terminologist has not as yet entered the generic subdivisions of radio. Thus there are two 'holes' here, as yet unfilled by a term. The solution adopted, is to create dun~ nodes in the network, which act as ring 'heads' for sub-groups each of which contains one of the mutually exclusive alternatives, plus any terms that are strongly bound to one or both of the alternatives, but not themselves mutually exclusive. The dunrnies refer back directly to the true head term, and x~v be converted at any time into full nodes if the tel~ninologist ' s answers to questions about his input indicates that a new term ought to occupy this position, with this particular relation to the original head term and with this particular sub-group of terms. Terms which are common to all sub-groups,, and which have a relationship to the original head term, are merely inserted in the ring dominated by the original head, and are by default interpreted as belonging to all subgroups. In our present example, this would apply to 'aerial'. Various checks are incorporated to prevent e.g. 
terms common to all sub-groups being bound to all these groups -that is, if one binds a term to every possible sub-group trader an original head, this would inTply that it does not in f~ct have any special binding power, or' cooccur only with terms in these sub-,groups. The resulting structure for this ac~ittedly simple example is shown in figure 3, The FLAGS field apart, all integer fields are logical pointers to other records in the network file, except for CONTHNT which points into another file containing records which give information on the actual character strings of terms. Most of the field values are self-explanatory. The FATF/~/BROTHER field has a dual value (indicated by an appropriate flag) and together with the SON field is used to build the basic ring structure.The VARIANT field is used to form another ring which links nodes representing the same tenm in relation to different 'heads', and is commonly employed to represent polyhierarchies, which as will be recalled posed a problem for DTs and Trs. Here the advantage of the CONTF2~T pointer becomes apparent, as only the geometrical networksustaining information is duplicated when a term enters into relation with more than one 'head'.Two fields remain which require more detailed explanation, r~mely the MODIFIER field and the FACET field. These were introduced to enhance the outline definitions the syst~n produced, which, although precise and consistent, were found to be r~ther uninforn~ative in certain respects. For example, to generate the definition 'A vernier is a type of scale' leaves something to be desired, when the definition in Wflster's dictionary refers to 'a small movable auxiliar F scale'. One could of course get round this by declaring a new type of scale to the system, namely 'auxiliary scale' or even 'movable auxiliary scale', if this were terminologically acceptable. We think though that to append 'small' would be stretching things rather far. However the introduction of a MODIFIF~ field allows some measure of finer description, by allowing the user to specify an adjective or adjectival phrase, which in this case, and perhaps commonly, would be relational, i.e. 'vernier' is seen as small in relation to a larger 'scale', but may be large with respect to e.g. 'microvernier'. The modifier is thus attached to the geometrical, relational node of the network, not to the content, stringbearlngnode.The FACET field takes its nsn~ from the facets well-known in the construction of DTs. A facet is here used in a similar manner to a DT facet, that is, as a classificatory tool, to give a different view of the data. A facet represents a gross grouping of terms according to some feature. Examples of facets are:BY DIRECTION BY MATERIAL BY SHAPE etc.In traditional DT work, though, a descriptor can appear only under one facet. In the present system, a term can appear under many facets. This gives extreme flexibility and allows the terminologist to draw fine and not-so-fine distinctions between groupings of terms. In most DTs, there is little attempt at structuring facets -they are used in a fairly ad-hoc manner. In the context of the present project, research is being carried out by Catherine Yarker into the nature of facets, which will shed light on how they could best be employed in the system. 
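The 16-byte network record described above (geometry only; the character strings of terms live in the separate content file) might be laid out along the following lines. The system itself is written in C; this is a Python stand-in with the field names taken from the text, while the example values and flag meanings are assumptions.

# Sketch of the network file record: apart from FLAGS, every field is a
# logical pointer, either to another network record or (for CONTENT) into
# the separate string-bearing content file.

from dataclasses import dataclass

@dataclass
class NetworkRecord:
    flags: int            # e.g. ring-head, dummy-node, father-vs-brother bit
    father_brother: int   # dual-valued pointer used to close the basic ring
    son: int              # ring of terms dominated by this node
    variant: int          # ring linking nodes for the same term under other heads
    content: int          # pointer into the content (string) file
    modifier: int         # optional adjectival phrase ("small", "auxiliary", ...)
    facet: int            # facet grouping such as BY SHAPE or BY MATERIAL

NULL = 0
vernier = NetworkRecord(flags=0, father_brother=3, son=NULL, variant=NULL,
                        content=17, modifier=42, facet=NULL)
print(vernier)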
An interesting point to note is that what are normally called terminological relationships could justifiably be viewed as a subset of facets, the difference being that they are more commonly used, display more structure, and have undergone systematic investigation over the years.

Output from the system is available in a variety of formats, depending on how much, or which type of, information the user desires. There now follow a few examples which show the potential of the system:

Query: CAR
Response: CAR is a type of VEHICLE, together with BUS, LORRY, TRAIN and TRACTOR.

Q.: PYLON
R.: PYLON is a part of WINDMILL, VANE and GENERATOR.

These show how a simple definition of a term is given, by relating it to its generic or partitive superordinate and by listing other terms having the same relationship to the superordinate as the query term. Experiments are still under way to determine how best to use facets, and how best to formulate the definitions. It appears useful, in a definition, first to relate a term to another by a common terminological relationship (part of, type of) and then to refine the definition by bringing in facets.

There is also the possibility to ask for a specific relationship. For example, if one were to ask for the parts of a wheel, the display might read:

WHEEL is composed of HUB, SPOKE, RIM, WHEEL CENTRE and TYRE.

The usefulness of more refined terminological relationships is shown by the following examples:

KEY is a part of KEYBOARD
WHEEL is a part of CAR
RADIO is a part of CAR
ENGINE is a part of CAR

where the standard 'part of' relationship proves inadequate. Therefore, we introduce subdivisions of the partitive relationship, which generate the following outputs:

KEY is an atomic part of KEYBOARD (i.e. the latter consists wholly of the former).
One or several WHEELs are contained in CAR.
RADIO is an optional part of CAR.
ENGINE is a constituent part of CAR (i.e. CAR contains other parts, including ENGINE).

These few examples hopefully give some indication of the system's potential. With a complex network enriched with refined terminological relationships, modifiers and facets, we can look forward to the generation of extended, informative definitions. It may be argued that problems could arise in maintaining the consistency of the network; however, the interactive input procedure is designed to show the consequences of a particular choice or insertion before the input is recorded definitively in the network. Nevertheless, there comes a point when one has to rely on the user himself not to make silly decisions. Due to the extreme flexibility of the system, and the use of a network as a representational device, the terminologist is free to introduce whichever relationships he desires, and to link whichever terms he chooses. This freedom may be anathema to those who adhere to the rigorous hierarchical approach to terminology; however, used with judicious care, the system is capable of recording multiple relationships in a way denied to the proponents of the hierarchical approach, and this in the end provides a basis for the generation of information that is more fully developed, and more illuminating due to its richness.

In the near future, an interactive editor will be implemented to help the terminologist adjust the data base in case of error, or to monitor the changes brought about by a change of relationship, facet, etc.

It should be noted that the system is designed to be multilingual, and is capable of outputting foreign language equivalents.
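The defining sentences shown above are produced by slotting terms from the network into fixed templates, one per relationship. The short C sketch below illustrates that step with the CAR and PYLON examples; the function name and the way the data are passed in are illustrative assumptions, not the project's actual routine.

#include <stdio.h>

/* One defining template per relationship; the slots are filled from the network:
   the query term, its head, and the brother terms sharing the same relationship. */
static void define(const char *term, const char *relation,
                   const char *head, const char *brothers[], int n)
{
    printf("%s is %s %s", term, relation, head);
    if (n > 0) {
        printf(", together with ");
        for (int i = 0; i < n; i++)
            printf("%s%s", brothers[i],
                   i < n - 2 ? ", " : (i == n - 2 ? " and " : ""));
    }
    printf(".\n");
}

int main(void)
{
    const char *brothers[] = { "BUS", "LORRY", "TRAIN", "TRACTOR" };
    define("CAR", "a type of", "VEHICLE", brothers, 4);   /* generic template   */
    define("PYLON", "a part of", "WINDMILL", NULL, 0);    /* partitive template */
    return 0;
}

A template for each refined relationship ('is an atomic part of', 'is an optional part of', and so on) would be added in the same way, and the MODIFIER and FACET fields could contribute further slots to the sentence.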
As we have chosen to deal with rather normalised terminology, we make no claims as to the capability of the system to handle more general vocabulary, where there would sometimes be radical differences between the conceptual systems of different languages. At the moment, we work purely with one-to-one mappings across language boundaries. However, unlike the traditional term bank, which merely enumerates foreign language equivalents, this system, upon addressing a foreign language equivalent in the data base, allows immediate entry to a ring of foreign language synonyms, from which the entire parallel conceptual network of the foreign terminology may be accessed. The possibility is then open for further definitions in the foreign language to be output, if desired.

The system is completely written in 'C', a general purpose systems programming language, and is implemented on a Z-80 based S-100 microcomputer, with 64 Kbytes of memory and a 33 Mbyte hard disk. When the system is eventually stable, a virtual memory routine written in assembly language by Sandra Waites will replace the existing 'C' routine, to speed up access times. The system runs to several thousand lines of code, including utilities and basic input/output functions ('C' provides none of the latter), and is split into several chained programs, for reasons of memory space restrictions. Execution time is therefore not as fast as it could be, although the hard disk does make a substantial difference to access times. When mounted on a 16-bit microcomputer running under the Unix operating system, as is envisaged in the near future, and equipped with improved index searching routines (not a primary purpose of the project), there should be little delay in response time.

For reasons of economy and experimentation, the basic network file record is limited to 16 bytes (see figure 4 above); however, in a future version of the system, other features may be added, for example a ring head pointer in each record, to save scanning all ring records to the right of the entry point to find the head. Further, the content file record, which contains information on character strings, could be expanded to hold the types of information found in traditional term bank records, e.g. grammatical class, context, author, date of entry, sources, etc. This would then imply that a full-blown term bank could be set up, organised around a semantic network, such that the bank would be structured according to terminological criteria, not to data base management criteria.

V ACKNOWLEDGEMENTS

I would like to thank Sandra Waites and Catherine Yarker for their valuable contribution towards the realisation of this system, and my colleagues Rod Johnson and Professor Juan Sager for their advice during the course of the project.
null
null
null
null
Main paper: I. Introduction: This paper describes an on-going project* being carried out at UMIST, which is concerned with the nature, construction and use of specialised machine dictionaries, concentrating on one particular type, the terminological thesaurus (Sager, 1981). The system described here is capable of dynamically producing outline definitions of technical terms, and it is this feature which distinguishes it from other automated dictionaries.

* Sponsored by the Department of Education and Science through the award of a Research Fellowship in Information Science.

A. Traditional Dictionaries

These traditional reference tools, while often containing high quality terminology collected after painstaking research, do not in the normal case afford the user an overall conceptual view of the subject field, as they exhibit a relative lack of structure. Moreover, due to the limitations of the printed page, the format of the entries is fixed, such that users with differing information needs are obliged to search through data that is irrelevant to them. The only real aid which allows the user to place a term roughly in the local conceptual environment is the conventional definition. However, such definitions tend to be idiosyncratic, inconsistent and non-rigorous, especially if the subject field is of any great size. Contexts, while of some help, are notoriously difficult to find and control, and should be seen only as supplementary to a rigorous definition which firmly places the term in the conceptual space. Those reference tools containing definitions which are rigorous exist mainly in the form of glossaries established by standards bodies. However, standardisation of terminologies is a slow affair, and is restricted to certain key terms or fields, such that no overall conceptual structure may be obtained from these glossaries.

B. Term Banks

Term banks offer many advantages over traditional dictionaries, and are becoming more and more common, especially among organisations which have urgent terminology needs, such as the Commission of the European Communities or the electronics firm Siemens AG in W. Germany. National bodies likewise use term banks to control the creation and dissemination of new standardised terms, e.g. AFNOR (France) and DIN (W. Germany). In the UK, work is going ahead, coordinated by UMIST and the British Library, to set up a British Term Bank. Other important term banks exist in Denmark (DANTERM) and Sweden.

However, despite this growth in the number of term banks and other computer based dictionaries, there remains a sad lack of overall structuring of the terminological data. In some cases, dictionaries have been transferred directly onto computer; in other cases, data base management considerations have overridden any attempt at a systematic terminological representation of the data. Some term banks have made provision for expressing relations between terms (AFNOR, DANTERM), but these relations are not as yet exploited to the full.

C. Documentation Thesauri (DTs)

These tools, whether on-line or published from magnetic tape, represent gross groupings of terms (via descriptors) for the purpose of indexing and retrieval of documents. A hierarchical structure is apparent in a thesaurus, with general relationships being established between descriptors, such as BT (Broad Term), NT (Narrow Term) and RT (Related Term). Some thesauri further distinguish e.g.
BTG (Broad Term Generic), NTP (Narrow Term Partitive) and so on.

However, by its very nature and purpose, a DT is merely a tool for selecting and differentiating between the chosen items of the artificial reference system of an indexing language. The existence of overlapping and even parallel indexing languages attests the inadequacy of DTs for representing generally accepted terminological relationships. Other problems associated with DTs are highlighted when attempts are made to merge DTs and to match descriptors across language boundaries. Existing DTs also find great difficulty in representing polyhierarchies (Wall, 1980), hence the ambiguous nature of the RT relation. The best known attempt at solving such difficulties is the THESAUROFACET (Aitchison, 1970).

D. Terminological Thesauri (TTs)

Traditionally, the TT (as advocated by e.g. Wüster, 1971) represents relationships between concepts rather than descriptors in as much detail as possible. As such, it has mainly been the preserve of terminologists. The TT has the advantage of precisely situating a term in the conceptual environment, by making appeal to relationships such as generic and partitive (and their various detailed subdivisions), and to relations of synonymy (quasi-synonyms, full synonyms, etc.) and antonymy. A classic example of the TT approach to structuring data is the Dictionary of the Machine Tool (Wüster, 1968), which has served as a basis for the present project. However, although systematic in conception and detailed in execution, this particular work displays the constraints inherent in the Wüsterian approach, which is akin to that of the DT, namely reliance on the hierarchy as a structuring tool. Consider, for example, the partial sub-tree rooted at PRINTER shown in figure 1a.

There exists a need for a representational device which can capture the necessary relationships between terms in a natural and informative manner, and which is not constrained by the limitations of the printed page, or the mental capacity of the terminologist. The present project has concentrated on finding a device capable of responding to the demands of different users of terminology, and which would allow a systematic representation of terminological data. We have retained the term Terminological Thesaurus, but have given it a new meaning. The particular device we have constructed combines the advantages of the conventional TT (systematic structure, relationships) and of the traditional dictionary (definitions). This is achieved by using inter-term relationships first to construct a highly complex network of terms, and subsequently, at the retrieval stage, to generate natural language defining sentences which relate the retrieved term to others in its terminological field. This is done by means of templates, such that the user is presented with an outline definition of a term (or several definitions, if a term contracts relations with more than one term) which will help him to circumscribe the meaning of the term precisely. Although the particular orientation of the project is to generate definitions, the semantic network that is constructed could be used for other ends, and future work will investigate these possibilities. We stress here that the definitions that are produced are not distinct texts stored in the machine and associated with individual terms; rather, the declared relationships between terms are used to dynamically build up a definition, and terms from the immediate conceptual environment are slotted into natural language defining templates.
These definitions have the advantages of being precise, system-internal and always correct, provided the correct relationships have been entered. Preliminary work in this area was first carried out at UMIST in the late 70s, when the feasibility of using terminological relationships to structure data was shown, and an experimental system was implemented, based on a hierarchical representation, that output simple definitions (Hann, 1978). This was found to be inadequate, for the reasons outlined above, hence the adoption in the present project of a richer data structure.

The data base for the system is then a semantic network. As with most semantic networks, the most one can really say about it is that it consists of nodes and arcs: terms form the nodes, and relations between terms the arcs. In actual fact, the data base consists of several files, with the character strings of terms being assigned to one file, such that all search and creation operations for the network proper are carried out using simply logical pointers to bare nodes carrying the geometrical information needed to sustain the network, thus avoiding the overhead of storing variable length strings, often in duplicate. A virtual memory has been implemented such that file accesses are kept to a minimum, and all pointer chains are followed in fast core. The basic data structure of the network is the ring, and the appearance of the network is that of a multiway extended tree structure. Facilities exist for on-line interactive creation and search of the network. An important design principle is that the computer should relieve the terminologist (or indeed the naive user) of the burden of keeping track of the spread and growth of a conceptual structure.

We have already seen how the hierarchical approach to terminology failed to account for all the facts, and forced the terminologist into misrepresenting or distorting the conceptual framework. With a network, ease and naturalness of representation is achieved, but at the cost of increased complexity for the human mind. A human will quickly lose track of the ramifications of a network, even if he could represent these adequately on some two-dimensional medium. Entrusting the management of the network to the computer ensures precision and consistency in a very large data base.

At the input stage, in the simple case, the terminologist need give only three pieces of information: two terms and the relationship between them. As the system is open-ended by design, the terminologist can declare new relationships to the system as he works, i.e. it is not necessary first to elaborate a set of relationships. Further, neither of the two input terms need necessarily be present in the data base. If both are absent, the system will create a closed sub-network, which will only be linked to the main network when other links are made with one or both of these terms. As input proceeds, one may have the (perhaps non-consecutive) inputs <X rel A>, <Y rel A>, <Z rel A>, where {X, Y, Z, A} are terms and <rel> a relationship. The system will link all terms related to <A> in a ring having <A> flagged as the 'head' node. Thus the terminologist is not required to state overtly the relationships between {X, Y, Z}. Leaving the computer to establish links among terms from an initial single input relationship ensures high recall.
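The ring-building step just described can be sketched in a few lines of C. The representation below is deliberately simplified (names stored in the nodes, a single relationship string per node, and one reused pointer in place of the separate SON and FATHER/BROTHER fields), so it should be read as an illustration of the <X rel A> mechanism rather than as the system's actual data layout.

#include <stdio.h>
#include <string.h>

#define MAX_NODES 64

typedef struct {
    char name[40];
    char rel[20];      /* relationship this node bears to its head         */
    int  head;         /* index of the 'head' node of the ring, -1 = none  */
    int  next;         /* next brother in the ring, -1 = end               */
} Node;

static Node net[MAX_NODES];
static int n_nodes = 0;

static int find_or_create(const char *name)
{
    for (int i = 0; i < n_nodes; i++)
        if (strcmp(net[i].name, name) == 0) return i;
    Node *n = &net[n_nodes];
    strcpy(n->name, name);
    n->rel[0] = '\0';
    n->head = n->next = -1;
    return n_nodes++;
}

/* Record "<x rel a>": insert x into the ring whose head is a. */
static void link_terms(const char *x, const char *rel, const char *a)
{
    int xi = find_or_create(x), ai = find_or_create(a);
    strcpy(net[xi].rel, rel);
    net[xi].head = ai;
    net[xi].next = net[ai].next;   /* push onto a's ring of dependants             */
    net[ai].next = xi;             /* (the real system uses separate SON/BROTHER)  */
}

int main(void)
{
    link_terms("BUS",   "type of", "VEHICLE");
    link_terms("CAR",   "type of", "VEHICLE");
    link_terms("LORRY", "type of", "VEHICLE");
    int head = find_or_create("VEHICLE");
    printf("Ring dominated by %s:", net[head].name);
    for (int i = net[head].next; i != -1; i = net[i].next)
        printf(" %s(%s)", net[i].name, net[i].rel);
    printf("\n");
    return 0;
}

Feeding it <BUS type of VEHICLE>, <CAR type of VEHICLE> and <LORRY type of VEHICLE> prints the ring dominated by VEHICLE without the user ever stating how BUS, CAR and LORRY relate to one another, which is the point made above about recall.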
null
null
null
null
{ "paperhash": [ "aitchison|the_thesaurofacet:_a_multipurpose_retrieval_language_tool" ], "title": [ "THE THESAUROFACET: A MULTIPURPOSE RETRIEVAL LANGUAGE TOOL" ], "abstract": [ "A description is given of the English Electric ‘Thesaurofacet’, a faceted classification and thesaurus covering engineering and related scientific, technical, and management subjects. A novel feature of the system is the integration of the classification schedules and thesaurus. Each term appears both in the thesaurus and in the schedules. In the schedules the term is displayed in the most appropriate facet and hierarchy: the thesaurus supplements this information by indicating alternative hierarchies and other relationships which cut across the classified arrangement. The thesaurus also controls word forms and synonyms and acts as the alphabetical index to the class numbers. The resulting tool is multipurpose, as easily applicable to shelf arrangement and conventional classified card catalogues as to co‐ordinate indexing and computerized retrieval systems. The reasons are given for modifying certain traditional facet techniques, including the choice of traditional disciplines for main classes, the lack of a ‘built‐in’ preferred order, and the me, in certain instances, of enumeration rather than synthesis to express multi‐term concepts. Methods of application of the Thesaurofacet in pre‐coordinate and post‐coordinate systems are discussed and a brief account is given of the techniques employed in its compilation." ], "authors": [ { "name": [ "J. Aitchison" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null ], "s2_corpus_id": [ "62110481" ], "intents": [ [] ], "isInfluential": [ false ] }
null
497
0.002012
null
null
null
null
null
null
null
null
08a23544e1147be74a79b1dbe00f107689c36849
17277795
null
Learning Translation Skills With a Knowledge-Based Tutor: French-Italian Conjunctions in Context
This paper describes an "intelligent" tutor of foreign language concepts and skills based upon state-of-the-art research in Intelligent Teaching Systems and Computational Linguistics. The tutor is part of a large R&D project in ITS which resulted in a system (called DART) for the design and development of intelligent teaching dialogues on PLATO and in a program (called ELISA) for teaching foreign language conjunctions in context. ELISA was able to teach a few conjunctions in English, Dutch and Italian. The research reported here extends ELISA to a complete set of conjunctions in Italian and French.
{ "name": [ "Cerri, Stefano A." ], "affiliation": [ null ] }
null
null
First Conference of the European Chapter of the Association for Computational Linguistics
1983-09-01
8
3
null
In the framework of a large research and development project, called DART*, concerned with the construction of an environment for the design of large scale Intelligent Teaching Systems (ITS), a prototype ITS, called ELISA, was developed which teaches words (conjunctions) of a foreign language in context (Cerri & Breuker, 1980; Breuker & Cerri, 1982). The DART system is an authoring environment based on the formalism of ATNs for the representation of the procedural part of the teaching dialogue and on Semantic Networks for the representation of the conceptual and linguistic structures.

The main achievement of DART was the integration of traditional Computer Assisted Learning (CAL) facilities, such as the ones available in the PLATO system, in an Artificial Intelligence framework, thus offering authors a friendly environment for a smooth CAL-ITS transition when they design and develop teaching programs.

* The DART system on PLATO is the result of a joint effort of the University of Pisa (I) and the University of Amsterdam (NL) and its property rights are reserved. It can be distributed for experimentation and research. This work was partially financed by a grant of the GRIS group of the Italian National Research Council.

ELISA was a testbed of the ideas underlying the DART project and at the same time a simple, but operational, "intelligent" foreign language teacher acting on a small subset of English, Dutch and Italian conjunctions. The sample dialogues of ELISA were chosen intentionally to exemplify, in the clearest way, issues such as the diagnosis of misconceptions in the use of foreign language conjunctions, which were addressed by the research. In particular, the assumption was made that a very simple representation of the correct knowledge needed for using foreign language conjunctions in context would be sufficient to model the whole subject matter as well as the incorrect behaviour of the student. Owing to its prototypical and experimental character, ELISA was not ready for concrete, large scale experimentation on any pair of the languages mentioned.

The research described in this report has been carried out with the concrete goal of making ELISA a realistic "intelligent" automatic foreign language teacher. In fact, we wanted to verify whether the simple representation of the knowledge in a semantic network was sufficient to represent a complete set of transformations from the first into the second language and vice versa. Italian and French were chosen, and a complete contrastive representation of the use of conjunctions in meaningful contexts was produced. The set of these unambiguous, meaningful contexts, about 600, defines the use of the conjunctions, about 40 for each language. Their correct use can be classified according to 60 distinguishing "concepts" which provide for all potential translations. The classification was done on empirical grounds and is not based on any linguistic rule or theory; it was in effect a contrastive bottom-up analysis of the use of conjunctions in Italian and French.

The specific choice of the teaching material highlighted many (psycho)linguistic and computational problems related to the compatibility between the design constraints of ELISA on the one hand and the subtleties of the full use of natural language fragments in translations on the other.
In particular, the complexity of the full network of conjunctions, concepts and contexts in the two languages suggests a large set of possible misconceptions to be discovered from the (partially) incorrect behaviour of the students, but only the subset of plausible ones should guide the diagnostic dialogue.

In the following, we briefly present the teaching strategy of ELISA and some examples of dialogue in order to introduce the problems referred to above and the solutions we propose. The full set of data is available in Merger & Cerri (1983), and a subset of it, as well as a more extended description of this work, can be found in Cerri & Merger (1982). A detailed description of DART and ELISA is a work in preparation.

Notice that for the development of this knowledge base no expertise other than that of a professional teacher was required, once the principles are provided by AI experts. This is a proof of the potential power of AI representations in educational settings and in projects of natural language translation. Practically, our program is one of the few Intelligent Systems available in the field of Foreign Language Teaching and usable on a large scale for Computer Assisted Learning.

A. The Purpose of ELISA

ELISA teaches a student to disambiguate conjunctions in a foreign language by means of a dialogue. The purpose of ELISA's dialogue is to build a representation of the student's behaviour which coincides with the correct representation of the knowledge needed to translate words in a foreign language in context. ELISA has a student model, which is updated each time the student answers a question. According to the classification of the answer, and the phase of the dialogue, ELISA selects one or more new questions to be put to the student in order to achieve its purpose. The mother tongue and the foreign language can be associated with the source and the target language (s.l. and t.l.) respectively, or vice versa: the system is symmetric. The main phases of ELISA are Presentation and Assessment.

The presentation phase is traditional. The teacher constructs an exhaustive set of Question Types from the subject matter represented in a knowledge network containing conjunctions and contexts in two languages, as well as concepts adequately linked to conjunctions and contexts (see for instance Figs. 1 and 2). Question types are pairs: a conjunction in the source language together with a conceptual meaning. For each conjunction in the s.l. and each concept possibly associated with it, a question type is generated.

For each question type, a classification of the conjunctions in the target language may be constructed. This classification is a partition of the t.l. conjunctions into three classes, namely expected right, expected wrong and unexpected wrong. The Expected Right conjunctions are all t.l. conjunctions which can be associated with the conceptual meaning of the question type. The Expected Wrong conjunctions are all t.l. conjunctions which can be a correct translation of the s.l. conjunction of the question type, but in a conceptual meaning different from that of the question type considered. The remaining conjunctions in the t.l. are classified as Unexpected Wrong: they have no relation in the knowledge base either with the s.l. conjunction or with the concept of the question type considered.

Notice that "concepts" are defined pragmatically, i.e. in terms of the purpose of the representation, which is to teach students to translate conjunctions correctly in context. This definition of concepts is not based on any (psycho)linguistic theory or phenomenon. In fact, we looked for contexts which have a one-to-one correspondence with concepts, so that for each context all the conjunctions associated with its specific conceptual meaning can be valid completions of the sentence, in both languages.
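The three-way classification can be computed directly from the links of the knowledge network. The following C sketch shows one way of doing it; the flat table of (Italian conjunction, concept, French conjunction) links is an assumed simplification of the network, populated with the handful of links that appear in the dialogue examples later in the paper.

#include <stdio.h>
#include <string.h>

/* One link of the contrastive network: Italian conjunction, concept, French conjunction. */
typedef struct { const char *it, *concept, *fr; } Link;

/* Assumed fragment of the network, taken from the dialogue examples below. */
static const Link net[] = {
    { "perché",     "II2", "pourquoi"     },
    { "perché",     "CR",  "parce que"    },
    { "come",       "SI",  "dès que"      },
    { "come",       "SI",  "aussitôt que" },
    { "non appena", "SI",  "dès que"      },
    { "come",       "CP",  "comme"        },
};
static const int N = (int)(sizeof net / sizeof net[0]);

/* Classify a French answer for the question type <it_conj, concept>. */
static const char *classify(const char *it, const char *concept, const char *answer)
{
    int right = 0, translates_src = 0;
    for (int i = 0; i < N; i++) {
        if (strcmp(net[i].fr, answer) != 0) continue;
        if (strcmp(net[i].concept, concept) == 0)
            right = 1;            /* answer is linked to the question's concept          */
        else if (strcmp(net[i].it, it) == 0)
            translates_src = 1;   /* valid translation of the s.l. conjunction elsewhere */
    }
    if (right) return "expected right";
    return translates_src ? "expected wrong" : "unexpected wrong";
}

int main(void)
{
    printf("pourquoi  -> %s\n", classify("perché", "II2", "pourquoi"));
    printf("parce que -> %s\n", classify("perché", "II2", "parce que"));
    printf("comme     -> %s\n", classify("perché", "II2", "comme"));
    return 0;
}

Given the question type (perché, II2), the sketch classifies "pourquoi" as expected right, "parce que" as expected wrong (a translation of "perché", but in CR contexts) and "comme" as unexpected wrong.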
The question is generated from the question type by selecting (randomly) a context linked to the concept of the question type, and inserting the conjunction of the question type. One of the (equivalent) translations of the context into the target language is also presented to the student. The student is required to insert the conjunction in the target language which correctly completes the sentence.

When the student makes an error, the correction consists simply in informing him/her of the correct answer(s). This feedback strategy should have the effect of teaching the student the correct associations and is similar to that used in most CAL programs. In contrast to most CAL programs, however, in ELISA questions are generated at execution time from information stored in the knowledge network, and the classification of answers is computed dynamically from the knowledge network; it is not a simple local pattern matching procedure.

The purpose of the assessment phase is to verify the acquisition of knowledge and skills on the part of the student during the presentation phase. It includes the diagnosis and remedy of misconceptions. Questions are generated as in the presentation phase, but in the case of a consistent incorrect answer, a "bug" (see for instance Brown & van Lehn, 1980), a complete dialogue with the student is performed in order to test the hypothesis that the bug arises from a whole set of errors grouped into one or more misconceptions.

The procedure operates briefly as follows. Each bug invokes: a. one concept called the Source Misconcept, which represents the meaning of the context of the question put to the student (e.g., conditional, temporal, etc.), and b. one or more concepts called Target Misconcepts, which represent the possible meanings of the conjunction used by the student in the answer. The set of target misconcepts does not include the source misconcept, by definition of the bug.

For each pair of source/target misconcepts, question types are generated and the questions are in turn put to the student. The selection of adequate question types is done on the basis of the possible misconception(s); a more skilled selection should include constraints about the plausible (expected) misconceptions, instead of considering exhaustively all the theoretical combinations. This is a main issue for further empirical research, as will be remarked later.

During each of these diagnostic dialogues, it is possible that new bugs, i.e. bugs not related to the source and target misconcepts, are discovered. When this is the case, these bugs are stored in a bug stack. Once the original misconception has been diagnosed and remedied, each bug in the bug stack triggers (recursively) the same diagnostic procedure. Again, a more skilled strategy for the ordering of bugs to be diagnosed and remedied could easily be designed, on the basis of empirical evidence drawn from experiments on students' behaviour.
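The control flow of the diagnostic procedure, with its bug stack, can be summarised in a short C sketch. The Bug type, the stubbed probe_pair and remedy routines and the printed trace are illustrative assumptions; in the real system each step is a full dialogue with the student that updates the student model and may push newly discovered bugs.

#include <stdio.h>

/* A bug pairs the concept of the question's context (source misconcept)
   with one possible meaning of the conjunction the student used (target misconcept). */
typedef struct { const char *src_concept, *tgt_concept; } Bug;

static Bug stack[16];
static int top = 0;

static void push(Bug b) { if (top < 16) stack[top++] = b; }

/* Stub: generate and ask questions for this pair; new bugs found here would be pushed. */
static void probe_pair(Bug b)
{
    printf("  probing: does the student confuse %s contexts with %s contexts?\n",
           b.src_concept, b.tgt_concept);
}

/* Stub: interpret the resulting confusion kernel and explain the misconception. */
static void remedy(Bug b)
{
    printf("  remedy: contrast %s and %s with further examples\n",
           b.src_concept, b.tgt_concept);
}

int main(void)
{
    /* Initial bug: a question asked in an II2 context was answered with a CR conjunction. */
    push((Bug){ "II2", "CR" });
    while (top > 0) {
        Bug b = stack[--top];   /* next undiagnosed bug                             */
        probe_pair(b);          /* diagnostic dialogue for this source/target pair  */
        remedy(b);              /* remedy once the misconception has been identified */
    }
    printf("bug stack empty: re-ask the question that was answered wrongly\n");
    return 0;
}

The loop mirrors the description above: bugs uncovered during a probe would be pushed and handled in their turn, and only when the stack is empty is the original question put to the student again.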
By "student model", we mean the set of "misconception matrixes" each related to the source and a target misconcept, and related to two or more conjunctions.As these matrixes may, in principle, present a large variety of different patterns, and even allow for variations in their dimensions, it would be a rather complex task to design a minimal set of typical erroneous patterns unless some reduction procedure is applied.So, we first compress the misconception matrixes into "confusion kernels" which are (2x3~ matrixes, then we compare the kernels with standard patterns of stereotypical misconceptions. Once the match is found, the diagnostic phase is considered ended, and a remedy phase is begun.The remedy consists in informing the student of the "nature of the misconception", i.e. the interpretation of the confusion kernel. This interpretation is possible by applying some (psychollinguistic criteria. In the following section, some of these Criteria will be outlined in order to explain the behaviour of ELISA in the examples of dialogue presented.In other words, the remedy is not a paraphrase of the history of the dialogue during the diagnosis, but an interpretation of the significant aspects of that dialogue. Although the ELISA project is to be considered completed, research is currently carried out in order to design a cognitively grounded theory of misconceptions occurring in this translation task. For some preliminary work, see Breuker & Cerri (1982~. It should be noticed that this is the most delicate aspect of this investigation. When ELISA was in a preliminary phase, and its dialogues were realistic but limited to a "toy" knowledge about the discriminative use of a few conjunctions, we did not expect that its extension to "real" knowledge would have implied such an explosion of possible right (and wrong~ links in the network, thus implying an explosion of possible models of student's behaviour. Now, the reduction of the number and complexity of these possible models requires undoubtedly empirical evidence. Currently, ELISA embodies enough intuitions to be considered a mature experimental tool, but not a complete theory of behaviour in translation, which will only be possible after many refinements of the simple theory ~ embodied by ELISA according to the experimental evidence in real educational settings.After a misconception has been remedied, the (newl bug stack is examined and each bug triggers a diagnostic-remedial procedure, possibly suggesting new bugs and so recursively.When a (new~ bug stack is empty, ELISA checks if all pairs of source/target misconcept have been examined, if it was not the case a diagnostic procedure is called, else the (original~ bug is considered remedied and ELISA formulates once more the question which received initially the wrong answer. We expect that now the student will not fail.In this section we will present some examples of dialogue which may well represent atypical interaction occurring as diagnosis and remedy of a student's misconceptions.The dialogue in Fig. 1 presents a prototype for a class of misconceptions which may be classified as "conceptual inversion", i.e. the model of the student represents the fact the (s~he distinguishes between the source and target misconcept, but associates each of the two with a conjunction specific for the other of the two. Fig, 1 Example of a dialogue concerning a "Conceptual Inversion" type of misconception. 
Fig. 1. Example of a dialogue concerning a "Conceptual Inversion" type of misconception. An excerpt of the knowledge network of ELISA concerning the (II2) and (CR) concepts is also presented.

In this example, the first question of ELISA, E1, has the type (perché, (II2)) and the expected right answer is "pourquoi". (II2) means 'Indirect Interrogation, 2nd type'. Usually, students know that "pourquoi" is correct in interrogative clauses, but sometimes they do not know that an interrogative clause might be indirect, as is the case here. Therefore the translation "pourquoi" is discarded, and the alternative "parce que" is preferred. This conjunction is indeed a correct translation of "perché", but in (CR) contexts. This bug is classified as "expected wrong" and the diagnostic strategy is entered.

The question E2 of ELISA checks whether the student knows that the translation of "perché" in (CR) contexts is "parce que". If this were the case, it could be guessed that the student does not know (the use of) "pourquoi", or alternatively knows (the use of) "pourquoi" but believes it to be correct in a meaning different from (II2) or (CR), and translates "perché" with "parce que" irrespective of the context; this misconception will be described in more detail in the next subsection. Instead, the student answers "pourquoi", which allows one to draw the following conclusions: a. the student distinguishes between (II2) and (CR) contexts, but b. (s)he binds (II2) with "parce que" and (CR) with "pourquoi", which is the reverse of the correct knowledge about the French conjunctions. We call this misconception Conceptual Inversion; as a remedy, ELISA explains this result to the student and gives more examples of the use of these conjunctions as translations of "perché" in each of the two conceptual meanings.

The second example refers to the dialogue presented in Fig. 2. The question type of E1 is (come, (SI)) and the expected right response of the student is either "aussitôt que" or "dès que".

E1: Come mi vide, mi fece un segno con la mano. (As (s)he saw me, (s)he waved to me.) ... il me vit, il me fit un signe de la main.

An excerpt from the knowledge network related to the dialogue is also included in Fig. 2. The French "comme", which interferes with the Italian "come", is not bound in any way to the concept (SI), but can instead be used correctly as a translation of "come" in (CP) contexts ((CP) means 'Comparative Process'). This interference can be at the origin of the misconception whereby, although (SI) and (CP) contexts are clearly distinguishable in Italian (also because there is a specific Italian conjunction, "(non) appena", for (SI), which was not true for the disambiguation of (II2) and (CR) in the example of Fig. 1), the Italian student consistently translates "come" with "comme" irrespective of the context.

The answer to E1, of type (come, (SI)), is S1: "comme", which is expected wrong. ELISA then puts a question E2 of type (non appena, (SI)), which is correctly answered by S2: "dès que". Finally, ELISA puts a question E3 of type (come, (CP)) and gets as answer "comme", which is again correct.

It can be concluded that: a. it is possible, but not certain, that the student distinguishes between (SI) and (CP) contexts; since "non appena" and "dès que" are both unambiguously bound to (SI), the answer S2 does not show that the student recognises the context (SI), for (s)he might instead associate the conjunction "non appena" directly with "dès que" without being aware of the conceptual meaning of the context; b. the latter hypothesis has to be considered confirmed by the behaviour of the student shown by S1 and S3: (s)he binds "come" to "comme" irrespective of the contexts, probably because of the interference between the two conjunctions. We call this misconception Direct Translation.
ELISA was a testbed for Intelligent Teaching Systems in foreign language teaching, designed and developed in DART on the PLATO system for large scale use. Its paradigm can be used for teaching the translation of any word or structure whose meaning depends on the context.

The full knowledge of ELISA concerning Italian and French conjunctions has been produced, and an analysis has been made of the possible patterns of wrong behaviour. This analysis has led to the design of a strategy for the diagnosis of the misconceptions underlying the surface mistakes, which has been (theoretically) tested in simple cases. Because the real correct knowledge is extremely complex, and so is the possible incorrect knowledge, we expect to introduce heuristics into our exhaustive diagnostic strategy once it is used in an experimental educational setting.

In particular, three aspects could be the object of empirical research on the protocols of interaction with ELISA, namely: a. the plausibility of the expected misconceptions, their frequency, and the explanations given by the students of the causes of their wrong behaviour; b. the heuristics to be inserted into ELISA in order to induce the misconception from the diagnostic dialogue, e.g. taking the history of the whole teaching dialogue into account; c. the remedial procedure to be applied once the misconception has been classified (e.g. a "socratic" method).

Theoretically, ELISA's Italian-French knowledge network is a contrastive representation of the use of conjunctions and can be utilised in teaching independently of the computer program. A representation of the syntax and the semantics of the contexts, allowing their automatic production, would certainly be the natural extension of ELISA's research within a project of automatic translation, and would also serve a better understanding and explanation of the student's misconceptions. Because the "a posteriori" linguistic definition of the "concepts" in the knowledge network can be considered an interlingua for the translation of conjunctions, one could conceive that an extension of the network of ELISA to more languages, constructed pragmatically from the contexts, although requiring a reorganisation of the conceptual structure of the network, could be of some interest for any project of multilingual automatic translation.
null
null
null
null
Main paper: introduction: In the framework of a large research and development project -called DART -concerned with the construction of an environment for the design of large scale Intelligent Teaching Systems (ITS~, a prototype ITS -called ELISA -was developed which teaches words (conjunctions~ of a foreign language in context (Cerri & Breuker, 1980 Breuker & Cerri, 1982~. The DART system is an authoring environment based on the formalism of ATNs for the representation of the procedural part of the teaching dialogue and on Semantic Networks for the representation of the conceptual and linguistic structures.The main achievement of DART was the integration of traditional Computer Assisted Learning (CAL~ facilities -such as the ones available in the PLATO systemin an Artificial Intelligence framework, thus offer-........The DART system on PLATO is the result of a joint effort of the University of Pisa (I~ and the University of Amsterdam (NL~ and its property rights are reserved. It can be distributed for experimentation and research.This work was ;artially financed by a grant of the GRIS group of the Italian National Research Council.ing authors a friendly environment for a smooth CAL -ITS transition when they design and develop teaching programs.ELISA was a testbed of the ideas underlying the DART project and at the same time a simple, but operational, "intelligent" foreign language teacher acting on a small subset of English, Dutch and Italian conjunctions.The sample dialogues of ELISA were chosen intentionally to exemplify, in the clearest way, issues such as the diagnostic of misconceptions in the use of foreign language conjunctions, which were addressed by the research.In particular, the assumption was made that a very simple representation of the correct knowledge needed for using f.l. conjunctions in context would have been sufficient to model the whole subject matter as well as the incorrect behaviour of the student.Owing to its prototypical and experimental character, ELISA was not ready for concrete, large scale experimentation on any pair of the languages mentioned.The research described in this report has been carried out with the concrete goal of making ELISA a realistic "intelligent" automatic foreign language teacher.In fact, we wanted to verify whether the simple representation of the knowledge in a semantic network was sufficient to represent a complete set of transformations from the first into the second language and vice versa.Italian and French were chosen. A complete contrastive representation of the use of conjunctions in meaningful contexts was produced.The set of these unambiguous, meaningful contexts -about 600 -defines the use of the conjunctions -about 40 for each language.Their correct use can be classified according to 60 distinguishing "concepts" which provide for all potential trans~lations.The classification was done on an empirical ground and is not based on any linguistic rule or theory.This was actually a contrastive bottom-up analysis of the use of conjunctions in Italian andThe specific choice of the teaching material highlighted many (psyeho~linguistic and computational problems related to the compatibility between the design constraints of ELISA on the one hand and the subtleties of the full use of natural language fragments in translations on the other. 
In particular, the complexity of the full network of conjunctions, concepts and contexts in the two languages suggests a large set of possible misconceptions to be discovered from the (partially> incorrect behaviour of the students but only the subset of plausible ones should guide the diagnostic dialogue.In the following, we briefly present the teaching strategy of ELISA and some examples of dialogue in order to introduce the problems referred above and the solutions we propose.The full set of data is available in Merger & Cerri (19837 and a subset of it as well as a more extended description of this work can be found in Cerri & Merger (1982~. A detailed description of DART and ELISA is a work in preparation.Notice that for the development of this knowledge base no other expertise was required than that of a professional teacher, once the principles are provided by AI experts. This is a proof of the potential power of AI representations in educational settings and in projects of natural language translation.Practically, our program is one of the few Intelligent Systems available in the field of Foreign Language Teaching and usable on a large scale for Computer Assisted Learning.A. The Purpose of ELISA ELISA teaches a student to disambiguate conjunctions in a foreign language by means of a dialogue. The purpose of ELISA's dialogue is to build a representation of the student's behaviour which coincides with the correct representation of the knowledge needed to translate words in a foreign language in context.ELISA has a student model, which is updated each time the student answers a question. According to the classification of the answer, and the phase of the dialogue, ELISA selects one or more new questions to be put to the student in order to achieve its purpose.The mother and the foreign language can be associated to the source and the target language (s.l. and t.l.~ respectively, or vice versa: the system is symmetric.The main phases of ELISA are Presentation and Assessment.The presentation phase is traditional.The teacher constructs an exhaustive set of Question Types from the subject matter represented in a knowledge network containing conjunctions and contexts in two languages as well as concepts adequately linked to conjunctions and contexts (see for instance Figs.l and 2~. These are pairs: conjunction in the source language/conceptual meaning. For each conjunction in the s.l. and each concept possibly associated to it a question type is generated.For each question type, a classification of the conjunctions in the target language may be constructed. This classification is a partition of the t.l. conjunctions into three classes, namely expected right, expected wrong and unexpected wrong. The Expected Right conjunctions are all t. i. conjunctions which can be associated to the conceptual meaning of the question type. The Expected Wrong conjunctions are all t.l. conjunctions which can be a correct translation of the s.l. conjunction of the question type, but in a ~onceptual meaning different from that of the question type considered. The remaining conjunctions in the t.l. are classified as Unexpected Wrong: they do not have any relation in the knowledge base with the s.l. conjunction, nor with the concept in the question type considered.Notice that "concepts" are defined pragmatically i.e. in terms of the purpose of the representation which is to teach students to translate correctly conjunctions in context. 
This defintion of concepts is not based on any (psycho~linguistic theory or phenomenon. In fact, we looked for contexts which have a one-to-one correspondence with concepts, so that for each context all the conjunctions associated to its specific conceptual meaning can be valid completions of the sentence, in both languages.The question is generated from the question type by selecting (randomly~ a context linked to the concept of the question type, and inserting the conjunction of the question type. One of the (equi-valent~ translations of the context into target language is also presented to the student. The student is required to insert the conjunction in the target language which correctly completes the sentence.When the student makes an error, the correction consists simply in informing him/her of the correct answer(s~. This feedback strategy should have the effect of teaching the student the correct associations and is similar to that used in most CAL programs.In contrast to most CAL programs, in ELISA questions are generated at execution time from information stored in the knowledge network, The classification of answers is computed dynamically from the knowledge network, it is not a simple local pattern matching procedure.The purpose of the assessment phase is to verify the acquisition of knowledge and skills on the part of the student during the presentation phase. It includes the diagnosis and remedy of misconceptions.Questions are generated as in the presentation phase, but in case of a consistent incorrect answer a bug (see for instance Brown & van Lehn, 19801 , -a complete dialogue with the student is performed in order to test the hypothesis that the bug arises from a whole set of errors grouped into one or more misconceptions.The procedure operates briefly as follows: each bug invokes a. one concept called Source Misconcept which represents the meaning of the context of the question put to the student (e.g., conditional, temporal, etc.1, and b. one or more concepts called Target Misconcepts which represent the possible meanings of the conjunction used by the student in the answer. The set of target misconcepts does not include the source misconcept by definition of the bug.For each pair of source/target misconcept, question types are generated and the questions are in turn put to the student. The selection of adequate question types is done on the basis of the Possible misconception(sl; a more skilled selection should include constraints ahout the Plausible (expectedl misconceptions, instead of considering exhaustively all the theoretical combinations. This is a maSn issue of further empirical research, as will be remarked later.During each of these diagnostic dialogues, it is possible that new bugs, i.e. bugs not related to the source and target misconcept, are discovered. When this is the case, these bugs are s=ored in a bug stack. Once the original misconception has been diagnosed and remedied, each bug in the bug stack triggers (recursivelyl the same diagnostic procedure.Again, a more skilled stra=egy for the ordering of bugs to be diagnosed and remedied could be easily designed, on the basis of empirical evidence drawn by experiments on studentfs behaviour.Finally, let us discuss in more detail the evaluation of the student model as it was built according to a diagnostic dialogue. 
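Before turning to that evaluation, the control structure just outlined can be summarized in a toy sketch: diagnosing one bug generates questions over its source/target misconcept pairs, any unrelated bugs uncovered along the way are pushed on a bug stack, and each of them later triggers the same procedure recursively, after which the original question is asked again. All names below are assumptions, and the "diagnosis" is reduced to returning pre-canned follow-up bugs.

    # Toy sketch (assumed names, not the ELISA code) of the bug-stack driven diagnosis.
    def diagnose(bug, follow_ups):
        """Stand-in for one diagnostic dialogue over the bug's source/target misconcepts.
        It returns whatever new, unrelated bugs that dialogue happens to reveal."""
        print(f"diagnosing and remedying bug over ({bug['source']}, {bug['targets']})")
        return follow_ups.get(bug["name"], [])

    def assess(initial_bug, follow_ups):
        stack = [initial_bug]
        while stack:                                  # empty stack: nothing left to explain
            stack.extend(diagnose(stack.pop(), follow_ups))
        print("re-asking the question that originally received the wrong answer")

    follow_ups = {"b1": [{"name": "b2", "source": "SI", "targets": ["CP"]}]}
    assess({"name": "b1", "source": "I12", "targets": ["CR"]}, follow_ups)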
By "student model", we mean the set of "misconception matrixes" each related to the source and a target misconcept, and related to two or more conjunctions.As these matrixes may, in principle, present a large variety of different patterns, and even allow for variations in their dimensions, it would be a rather complex task to design a minimal set of typical erroneous patterns unless some reduction procedure is applied.So, we first compress the misconception matrixes into "confusion kernels" which are (2x3~ matrixes, then we compare the kernels with standard patterns of stereotypical misconceptions. Once the match is found, the diagnostic phase is considered ended, and a remedy phase is begun.The remedy consists in informing the student of the "nature of the misconception", i.e. the interpretation of the confusion kernel. This interpretation is possible by applying some (psychollinguistic criteria. In the following section, some of these Criteria will be outlined in order to explain the behaviour of ELISA in the examples of dialogue presented.In other words, the remedy is not a paraphrase of the history of the dialogue during the diagnosis, but an interpretation of the significant aspects of that dialogue. Although the ELISA project is to be considered completed, research is currently carried out in order to design a cognitively grounded theory of misconceptions occurring in this translation task. For some preliminary work, see Breuker & Cerri (1982~. It should be noticed that this is the most delicate aspect of this investigation. When ELISA was in a preliminary phase, and its dialogues were realistic but limited to a "toy" knowledge about the discriminative use of a few conjunctions, we did not expect that its extension to "real" knowledge would have implied such an explosion of possible right (and wrong~ links in the network, thus implying an explosion of possible models of student's behaviour. Now, the reduction of the number and complexity of these possible models requires undoubtedly empirical evidence. Currently, ELISA embodies enough intuitions to be considered a mature experimental tool, but not a complete theory of behaviour in translation, which will only be possible after many refinements of the simple theory ~ embodied by ELISA according to the experimental evidence in real educational settings.After a misconception has been remedied, the (newl bug stack is examined and each bug triggers a diagnostic-remedial procedure, possibly suggesting new bugs and so recursively.When a (new~ bug stack is empty, ELISA checks if all pairs of source/target misconcept have been examined, if it was not the case a diagnostic procedure is called, else the (original~ bug is considered remedied and ELISA formulates once more the question which received initially the wrong answer. We expect that now the student will not fail.In this section we will present some examples of dialogue which may well represent atypical interaction occurring as diagnosis and remedy of a student's misconceptions.The dialogue in Fig. 1 presents a prototype for a class of misconceptions which may be classified as "conceptual inversion", i.e. the model of the student represents the fact the (s~he distinguishes between the source and target misconcept, but associates each of the two with a conjunction specific for the other of the two. Fig, 1 Example of a dialogue concerning a "Conceptual Inversion" type of misconception. 
An excerpt of the knowledge network of ELISA concerning the (I12) and (CR) concepts is also presented. In this example, the first question of ELISA, E1, has the type (perché, (I12)) and the expected right answer is "pourquoi". (I12) means: 'Indirect Interrogation, 2nd type'. Usually, students know that "pourquoi" is correct in interrogative clauses, but sometimes they do not know that an interrogative clause might be indirect, as in our case. Therefore, the translation "pourquoi" is discarded, and the alternative "parce que" preferred. This conjunction is indeed a correct translation of "perché", but in (CR) contexts. This bug is classified as "expected wrong" and the diagnostic strategy is entered. The question E2 of ELISA checks if the student knows that the translation of "perché" in (CR) contexts is "parce que". If this is the case, it could be guessed that the student does not know (the use of) "pourquoi", or alternatively knows (the use of) "pourquoi" but believes "pourquoi" to be correct in a meaning different from (I12) or (CR), and translates "perché" with "parce que" irrespective of the context. This misconception will be described in more detail in the next subsection. Instead, the student answers "pourquoi", which allows one to draw the following conclusions: a. the student distinguishes between (I12) and (CR) contexts, but b. (s)he binds (I12) with "parce que" and (CR) with "pourquoi", which is the reverse of the correct knowledge about French conjunctions. We call this misconception Conceptual Inversion; the remedy of ELISA will explain this result to the student and give more examples of the use of these conjunctions as translations of "perché" in each of the two conceptual meanings. The second example refers to the dialogue presented in Fig. 2. The question type of E1 is (come, (SI)) and the expected right response of the student is either "aussitôt que" or "dès que": Come mi vide, mi fece un segno con la mano. (As (s)he saw me, (s)he waved to me.) ... il me vit, il me fit un signe de la main. An excerpt from the knowledge network related to the dialogue is also included. The French "comme", which is interfering with the Italian "come", is not bound in any way to the concept (SI), but instead can be used correctly as a translation of "come" in (CP) contexts. This interference can be at the origin of the misconception consisting of the conviction that, although (SI) and (CP) contexts are clearly distinguishable in Italian (also because there is a specific Italian conjunction "(non) appena" for (SI), which was not true for the disambiguation of (I12) and (CR) in the example of Fig. 1), the Italian student consistently translates "come" with "comme" irrespective of the context. The answer to E1 of type (come, (SI)) is S1: "comme", which is expected wrong. ELISA puts a question E2 of type (non appena, (SI)), which is correctly answered by S2: "dès que". Finally, ELISA puts a question E3 of type (come, (CP)) and gets as answer "comme", which is again correct. It can be concluded that: a. it is possible, but not certain, that the student distinguishes between (SI) and (CP) contexts. Since "non appena" and "dès que" are both unambiguously bound to (SI), the answer S2 does not show that the student recognizes the context (SI); (s)he might instead associate directly the conjunction "non appena" with "dès que" without being aware of the conceptual meaning of the context; b.
the last hypothesis has to be considered confirmed by the behaviour of the student shown by SI and $3: (s)he binds "come" to "comme" irrespective of the contexts~ probably because of the interference between the two conjunctions. We call this misconception Direct Translation.ELISA was a testbed for Intelligent Teaching(CP) means: 'Comparative Process'.Systems in foreign language teaching, designed and developed in DART on the PLATO system for large scale use. Its paradigm can be utilized for teaching to translate any word or structure whose meaning depends on the context.The full knowledge of ELISA concerning Italian and French conjunctions has been produced and an analysis has been made of the possible patterns of wrong behaviour. This analysis has led to the design of a strategy for the diagnosis of misconceptions underlying the surface mistakes, which has been (theoretically) tested in simple cases.Because the real correct knowledge is extremely complex, and so the possible incorrect one, we expect to introduce heuristics into our exhaustive diagn0stic strategy once it will be used in an experimental educational setting.In particular, three aspects could be the object of empirical research on the protocols of interaction with ELISA, nl: a. the plausibility of the expected misconceptions, their frequency and the explanations -given by the students -of the causes of their wrong behaviour; b. the heuristics to be inserted in ELISA in order to induce the misconception from the diagnostic dialogue, e.g. taking the history of the whole teaching dialogue into account; c. the remedial procedure to be applied once the misconception has been classified (e.g. a "socratic" method).Theoretically, ELISA's Italian-French knowledge network is a contrastive representation of the use of conjunctions and can be utilized in teaching independently on the computer program.A representation of the syntax and the semantics of the contexts for their automatic production would certainly be the natural extension of ELISA's research within a project of automatic translation, and for a better understanding and explanation of the student's misconceptions as well.Because the "a posteriori" linguistic definition of the "concepts" in the knowledge network can be considered an interlingua for the translation of conjunctionS, one could conceive that an extension of the network of ELISA to more languages, constructed pragmatically from the contexts, although requiring a reorganization of the conceptual structure of the network, could be o~ some interest for any project of multilingual automatic translation. Appendix:
null
null
null
null
{ "paperhash": [ "brown|repair_theory:_a_generative_theory_of_bugs_in_procedural_skills", "cerri|a_rather_intelligent_language_teacher." ], "title": [ "Repair Theory: A Generative Theory of Bugs in Procedural Skills", "A Rather Intelligent Language Teacher." ], "abstract": [ "This paper describes a generative theory of bugs. It claims that all bugs of a procedural skill can be derived by a highly constrained form of problem solving acting on incomplete procedures. These procedures are characterized by formal deletion operations that model incomplete learning and forgetting. The problem solver and the deletion operator have been constrained to make it impossible to derive “star-bugs”—algorithms that are so absurd that expert diagnosticians agree that the alogorithm will never be observed as a bug. Hence, the theory not only generates the observed bugs, it fails to generate star-bugs. \n \nThe theory has been tested on an extensive data base of bugs for multidigit subtraction that was collected with the aid of the diagnostic systems buggy and debuggy. In addition to predicting bug occurrence, by adoption of additional hypotheses, the theory also makes predictions about the frequency and stability of bugs, as well as the occurrence of certain latencies in processing time during testing. Arguments are given that the theory can be applied to domains other than subtraction and that it can be extended to provide a theory of procedural learning that accounts for bug acquisition. Lastly, particular care has been taken to make the theory principled so that it can not be tailored to fit any possible data.", "A semi-intelligent CAI program for teaching the use of conjunctions in a number of foreign languages is presented. The representation of well known confusions in the use of these conjunctions form the basis of tutorial strategies to correct the student. The program is written as a try-out of DART, an ATN-based system for authoring intelligent CAI lessons on the PLATO-system. \n \nEmphasis is put on the fact that intelligence in CAI is not all-or-none: the degree of intelligence required is dependent on the variances permitted and expected in the student's responses. Misconceptions are one of the major sources of variance. DART is proposed as facilitating the transition between traditional and intelligent CAI." ], "authors": [ { "name": [ "J. Brown", "K. VanLehn" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "S. Cerri", "J. Breuker" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null ], "s2_corpus_id": [ "970749", "60874431" ], "intents": [ [], [ "background" ] ], "isInfluential": [ false, false ] }
null
497
0.006036
null
null
null
null
null
null
null
null
6a10ac0590fdf06a85a533171662b910c474d331
32981346
null
An Island Parsing Interpreter for the Full Augmented Transition Network Formalism
Island parsing is a powerful technique for parsing with Augmented Transition Networks (ATNs) which was developed and successfully applied in the HWIM speech understanding project. The HWIM application grammar did not, however, exploit Woods' original full ATN specification. This paper describes an island parsing interpreter based on HWIM, but containing substantial and important extensions to enable it to interpret any grammar which conforms to that full specification of 1970. The most important contributions have been to eliminate the need for prior specification of scope clauses, to provide more power by implementing LIFTR and SENDR actions within the island parsing framework, and to improve the efficiency of the techniques used to merge together partially-built islands within the utterance. This paper also presents some observations about island parsing, based on the use of the parser described, and some suggestions for future directions for island parsing research.
{ "name": [ "Carroll, John A." ], "affiliation": [ null ] }
null
null
First Conference of the {E}uropean Chapter of the Association for Computational Linguistics
1983-09-01
13
8
null
null
parsing is a powerful technique for parsing with Augmented ~ansition Networks (ATNs) which was developed and successfully applied in the HWIM speech understanding project. The HWIM application grammar did not, however, exploit Woods' original full ATN specification. This paper describes an island parsing interpreter based on HWIM, but containing substantial and important extensions to enable it to interpret any grammar which conforms to that full specification of 1970. The most important contributions have been to eliminate the need for prior specification of scope clauses, to provide more power by implementing LIFTR and SENDR actions within the island parsing framework, and to improve the efficiency of the techniques used to merge together partially-built islands within the utterance. This paper also presents some observations about island parsing, based on the use of the parser described, and some suggestions for future directions for island parsing research.In an ordinary ATN parser, the parsing of a sentence is performed unidirectionally (normally left-to-right); the parser traverses each arc in the directed graph of the grammar in the same direction, starting from the initial state.An island ATN parser, on the other hand, can start at any point in the transition network with a word match from anywhere in the input string, not just at the left end, and parse the rest of the string working outwards to the left and right, adding words to each end of the 'island' formed. Indeed, any number of islands can be built, the parser merging the islands together as their boundaries meet. Clearly, in speech processing, island parsing is well suited to gearing sentence processing to the most solid inputs from the acoustic anal yser.The main problems with previous implementations of island parsing for ATNs have been with scope clauses and LIFTR and SENDR actions; essentially, these problems arise because in island parsing structure determination has to work from right-to-left as well as in the more usual left-to-right direction, i.e. against the normal parsing flow.The ATN formalism provides for actions on the arcs of the network which can set and modify the contents of 'registers', and arbitrary tests on an arc to determine whether that arc is to be followed. In an island parser, an action or test is referred to as being context-sensitive when it either requires the value of a register that is set somewhere to the left, or changes the value of a register that is used somewhere also to the left. For each context sensitive action or test, there exists a set of states to its left such that the action can safely be performed if its execution is delayed until the parse has passed through one of these states. This list of states must be expressed, and in the HWIM system (Woods, 1976) , this is done when writing the grammar by using a scope clause. The form of a scope clause is (SCOPE <scope specification> <list of context-sensltive actions>)where the scope specification is the list of precursor states. This requirement for prior specification of scope clauses clearly adds to the burden of the grammar writer.I have implemented a more satisfactory treatment of scope clauses. 
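As background for what follows, here is a minimal sketch of how such a scope clause might be represented and enforced; the state names, register handling and all identifiers are assumptions, not HWIM's actual code.

    # Sketch (assumed representation) of a scope clause: a context-sensitive action is
    # held back during a right-to-left parse until the parse reaches one of the
    # precursor states listed in its scope specification.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class ScopedAction:
        precursors: set            # the <scope specification>: states after which it is safe
        action: Callable           # the deferred context-sensitive action or test

    def release(saved, state, registers):
        """Execute (and drop) every saved action whose scope is satisfied at `state`."""
        still_pending = []
        for sa in saved:
            if state in sa.precursors:
                sa.action(registers)               # now safe to perform
            else:
                still_pending.append(sa)           # keep holding it
        return still_pending

    # E.g. a subject-verb agreement test scoped back to (assumed) states preceding the
    # subject noun phrase:
    agreement = ScopedAction({"S/", "S/AUX"},
                             lambda regs: print("agreement check on", regs.get("subj")))
    pending = [agreement]
    pending = release(pending, "VP/V", {"subj": "dogs"})   # scope not yet satisfied
    pending = release(pending, "S/",   {"subj": "dogs"})   # runs the test; pending is now []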
That treatment is described below, following the discussion of LIFTR and SENDR actions, which require special handling in scoping. Two important actions (indeed it is difficult to write a grammar of any substantial subset of English without them) defined by Woods (1970), namely LIFTR and SENDR, present implementation difficulties in an island parsing interpreter. These actions were evidently excluded from the HWIM parser since there is no mention of them by Woods (1976). The action LIFTR can occur on any arc in the network, to transmit the value of a register up to the next higher level in the network, whereas SENDR can only occur on a PUSH arc, to transmit the value of a register down to a lower level. The same mechanism can be used to implement LIFTR actions as is used to transmit the result of each lower level computation up to the next higher level as the value of the special register '*'. However, LIFTR presents problems with scope clauses in an island parsing ATN interpreter: if an action (LIFTR <register> ...) occurs in a sub-network, any action using that register in any higher sub-network that PUSHes for the one containing the LIFTR must be scoped so that the action is not performed in a right-to-left parse at least until after the PUSH has been executed. See figure 1. (Figure 1. Scoping LIFTR actions: an action using <register> at the higher level must be scoped to a state before the PUSH arc for the sub-network containing the (LIFTR <register> ...) action.) So, for example, when parsing English from right to left, tests that the verb and subject agree in person and number (if this information is carried in registers) must be postponed until the PUSH for the beginning of the subject noun phrase. Section III describes how my interpreter takes care of this scoping problem. Since, in a right-to-left parse, lower-level sub-networks are traversed before the PUSH to them is performed, there is no way of knowing the value of a register that is being SENDRed at least until after the PUSH. Thus all actions involving registers whose values depend on the value of that register must be saved to be executed at the higher level. I have dealt with this by putting such actions into SCOPE clauses containing a special new scope specification, which I call scope SENDR. Actions with scope SENDR are never executed at the current level in the network, but are saved and incorporated into the next higher level sub-network (possibly with a changed scope specification) during processing of the PUSH at that higher level, as follows: (1) The form on the POP arc to be returned as the value of the special register '*' on return to the next higher level is put into an explicit LIFTR action. (2) The scopes of all the saved actions are changed to the same as those of the SENDR actions on the PUSH arc. (3) All LIFTR actions are changed to highlvl-setr actions (see below). (4) Scoped calls to lowlvl-start and lowlvl-finish (see below) are put respectively before and after the saved actions. (5) All the SENDR actions on the PUSH arc are put in front of the lower level saved actions. The rest of the actions on the PUSH arc are then processed as normal. The purposes of the actions lowlvl-start and lowlvl-finish are to respectively set up and restore a stack of register contexts (hold-regs), each level in the stack holding the register contents of one level in the network, with the base of the stack representing the highest level of saved actions. The action highlvl-setr performs a SETR at the next higher level of register contexts on the stack.
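The register-context stack can be pictured with the short sketch below; the function names mirror lowlvl-start, lowlvl-finish and highlvl-setr, but everything else (the global dictionaries, the NP example) is an assumption rather than the interpreter's actual code. The register excerpt from the paper's figures, reproduced next, shows the corresponding worked example.

    # Sketch (assumed implementation) of the hold-regs stack: each entry holds the
    # register contents of one network level while saved lower-level actions are
    # replayed at a higher level.
    hold_regs = []            # stack of register contexts
    regs = {}                 # registers of the level currently being simulated

    def lowlvl_start():
        """Enter a saved lower-level context: push the current registers, start afresh."""
        global regs
        hold_regs.append(regs)
        regs = {}

    def lowlvl_finish():
        """Leave the lower-level context: restore the registers of the level above."""
        global regs
        regs = hold_regs.pop()

    def highlvl_setr(register, value):
        """Perform a SETR one level up, i.e. in the context on top of the stack."""
        hold_regs[-1][register] = value

    # Replaying saved actions for a pushed-for NP (illustrative): the lower level builds
    # its result, passes it up via highlvl_setr, and its context is then discarded.
    lowlvl_start()
    regs["nphrase"] = ["the", "dog"]          # a lower-level SETR
    highlvl_setr("*", regs["nphrase"])        # stands in for the explicit LIFTR of the POP form
    lowlvl_finish()
    print(regs)                               # {'*': ['the', 'dog']}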
(Example register contents from the figure: regs <- ((* (NP nphrase)) (reg0 pphrase)); hold-regs <- NIL.) This would be translated into the list of saved actions on the left of figure 3, and when control had passed through a set of states such that the actions' scope specifications were satisfied, execution would produce the sequence of operations shown on the right of the figure. The second pass finds, for each sub-network, the names of all the registers whose values depend on other registers (for use in the subsequent scoping passes). It does this by finding the registers used in each register-setting action (SETR, LIFTR, or SENDR), using knowledge of the register usage of each function used, and for each register which is not being assigned to, it appends onto the property-list of the register the name of the register being set in the current action, and a pointer to that register's property-list. Thus in the end, each register is associated with a list of all the registers in the sub-network which depend on the value of that register. Pass four finds all actions that use registers that have been passed down from a higher level by a SENDR, and also actions which use registers dependent on those SENDRed registers, giving the actions scope SENDR.
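A rough sketch of the bookkeeping performed by passes two and four might look like the following; the real interpreter records this information on LISP property lists, whereas the dictionaries and the transitive closure below are assumptions.

    # Sketch (assumed data structures) of pass 2: for each register-setting action,
    # record, for every register it reads, that the assigned register depends on it.
    from collections import defaultdict

    def register_dependencies(actions):
        """actions: list of (register_being_set, registers_read) pairs in one sub-network."""
        dependents = defaultdict(set)          # register -> registers depending on its value
        for set_reg, read_regs in actions:
            for r in read_regs:
                if r != set_reg:
                    dependents[r].add(set_reg)
        return dependents

    # Pass 4 then marks as scope SENDR every action that touches a SENDRed register or
    # a register dependent on one (here taken transitively, which is an assumption).
    def sendr_tainted(sendr_regs, dependents):
        tainted, frontier = set(sendr_regs), list(sendr_regs)
        while frontier:
            for dep in dependents.get(frontier.pop(), set()):
                if dep not in tainted:
                    tainted.add(dep)
                    frontier.append(dep)
        return tainted

    deps = register_dependencies([("subj", {"det", "noun"}), ("agr", {"subj", "verb"})])
    print(sendr_tainted({"det"}, deps))        # {'det', 'subj', 'agr'}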
null
As with LIFTR, SENDR actions need special scoping treatment: since there can be any type of interaction on a lower level between registers SENDRed and registers to be LIFTRed, the only safe execution time for actions using these registers and for actions referencing registers whose values depend on them (without engaging in full symbolic execution) is when the higher level sub-network has been fully traversed. There is a special scope specification for this-scope T.The process of writing scope clauses into the grammar for an island parser is laborious, and therefore prone to error. The implementation described here can automatically detect all contextsensitive actions and tests and put them into scope clauses containing suitable (and usually optimal) scope specifications. Thus the parser can interpret straight off an ATN grammar that has been written for an ordinary left-to-right parser.The sooping algorithm consists of five passes over the grammar, the first four dealing with the exceptional scoping required by LIFTR and SENDR actions, and the fifth with the rest of the actions and tests in the network. Comments on the algorithm follow the necessarily technical account of it.The five passes of the scoping algorithm will now be described, actions and tests in the network being treated identically.Pass one takes care of the scoping problem with LIFTR actions mentioned in the previous sectionthat a register being LIFTRed must be scoped back at the higher level to at least before the PUSH arc.But if the register is used on the PUSH arc itself, the scoping algorithm should produce correct scope specifications without needing to treat this as a special case. Thus the solution I have adopted is for the algorithm to check whether the register appears on the PUSH arc, and if not, the dummy action (SETR <register> (GETR <register>)) is added to the actions on the PUSH arc.Pass three deals with scoping SENDR actions, giving them the treatment described at the end of the last section -it assigns the scope specification T to all actions which reference registers whose values depend on any of the registers used in actions on the same PUSH arc as a SENDR action.The rest of the scoping is performed in pass five. Each action is considered in turn, collecting the names of all registers it uses, and the names of those whose values depend on them. The scope specification is then computed depending on the common pert of all possible paths from the start of the current sub-network to any action which is dependent on the action under consideration. This list of states ('left-states') is the intersection of the states to the left of each action which uses any of the collected registers.The algorithm distinguishes the following four cases for the contents of 'left-states':-(1) If NILthere are at least two nonintersecting paths from the left to the arc containing the action which reference registers dependent on those in the action, so return scope specification T.(2) All states in 'left-states' are in loops in the network -it is very difficult to compute the optimal scope specification, so return T (which will always be correct though perhaps not optimal). 
The problem with loops is that no register should be changed or referenced in a right-to-left parse until control has finally passed out of the loop.(3) The left state of the arc containing the action being scopad is in 'left-states', and the state is not in a loop-all dependent actions are to the right of the arc, so return NIL.(4) Otherwise -return as scope specification a list of all states in 'left-states' that are not in loops.If an action does not use any registers, it obviously does not need scoping, and the algorithm bypasses it. If a scope specification is returned for an action that is already scoped, whether the new scope 'overwrites' the old one depends on what is already there:scope SENDR overwrites scope T scope T overwrites scope <list of states> scope <list of states> is appended to an existing scope <list of states> B. Discussion of the Scoping AlgorithmThe algorithm does not produce totally optimal scope specifications in all circumstances: that is, actions may sometimes be scoped so they are saved for longer in the parse before they are executed than may strictly be necessary. The main shortcoming is in dealing with networks where there are two or more alternative separate paths containing actions using registers computed to be interdependent; for example in scoping the network fragment in figure 4, the two actions using register 'noun' are scoped (NP/) but the paths through them are independent and the register is not used elsewhere, so the actions do not need to be scoped at all. There does not seem to be any way around this problem by modifying the algorithm, but fortunately scope specifications that are not entirely optimal (as in this case) should only minimally affect the performance of the interpreter ~hen parsing a sentence. configtvations 'Sconfigs 'I at the boundaries of each island that are compatible, and then splice those that completely cover a sub-network into as many successively higher levels as possible (by calling Woods' 'Complete-right' function as many times as possible).In a real-time speech understanding system (depending on the strategy it employed), the time saved by this method could be critical to the success of the system.The parser has been tested (Carroll, 1982) with various sized (purely syntactic) grammars, simulating speech processing by the arbitrary selection of one or more words in a typed string as parsing starting points, and the arbitrary addition of words to the left and right of these.It has been observed that the more complex the structure of the sentence being parsed, the more Sconfigs get generated, and consequently the longer the parse takes. There are, however, other less obvious factors influencing the number of Sconfigs generated.Seonfigs tend to proliferate embarrassingly when there are many possible paths of JUMP arcs between states on the same level of the grammar due to scoped tests having to be saved and not being immediately executable.If there are no BENDR actions down to the subnetwork containing the JUMPs, then none of the saved tests will have to be carried up to a higher level, and so many of the Sconfigs will be filtered out when the POP arc st that level is processed. 
But if there are SENDR actions, the Sconfigs will not be filtered so effectively, will be carried up to higher levels, and at each higher level the number of Sconfigs will multiply.Sconfig proliferation and resulting combinatorial explosion will always be associated with island parsing usinog large complex grammars that are purely syntactic~; unfortunately LIFTR and SENDR actions aggravate the problem. However, the utility of these actions more than outweighs the consequent decrease in parse-time efficiency.In the HWIM system, to join together two adjacent islands to make one island covering them both, the smaller island was broken up and the words from it added onto the end of the larger. This obviously wastes all the effort expanded in building the smaller island.A more efficient method of joining two islands which I have implemented, is to merge all the segment I The state of the parse in an island parser is held as a list of segment configurations, each of which represents a partial parse covering one or more words in the utterance. 2 It seems that the HWIM parser also encountered these problems; their solution was to employ semantic grammars, with a large number of WRD arcs, to use both syntactic and semantic categories on CAT arcs, and to expand the set of constituents pushed for to include "semantic constituents".Parsing the same sentence with differing orders of adding the words in it to islands usually results in differing numbers of Sconflgs being created. For example, two parses of the sentence JOHN IS EAGER TO PLEASE. gave the results:run I run 2Sconfigs generated 388 182 parse time (secs.)1.77 1.08The difference was caused by the fact that in the first run, 'IS' was used as an initial island, setting up expectations for more possible distinct final sentence structures than in the second run, which started with the word 'PLEASE'. This difference in ex pectatlon status reflects the different structuring potential of the two words.Island parsing appears to offer a promising solution to the problem of parsing written as well as spoken sentences containing conjunctions; although the ATN formalism is quite powerful in expressing natural language grammars, it faces problems deal ing with sentences containing conjunctions: (WRD AND ...) arcs need to be inserted almost everywhere since AND can conjoin any two constituents of the same type. Boguraev (1982) has suggested that this problem might be overcome by building islands at each conjunction and parsing outwards from them. ATN. For this reason, restrictions might have to be placed on the ATN grammars used, but this requires further investigation.VII ACKNOWLEDGEMENTS I would like to thank Bran Boguraev for his guidance during the writing of the interpreter, and for supplying the ATN grammars I have used. Thanks also to Karen Sparck Jones and John Tait for their comments on earlier drafts of this paper.
null
Main paper: pass 2: The second pass finds, for each sub-network, the names of all the registers whose values depend on other registers (for use in the subsequent scoping passes). It does this by finding the registers used in each register-setting action (SETR, LIFTR, or SENDR), using knowledge of the register usage of each function used, and for each register which is not being assigned to, it appends onto the property-list of the register the name of the register being set in the current action, and a pointer to that register's property-list.Thus in the end, each register is associated with a list of all the registers in the sub-network which depend on the value of that register. scope problems: As with LIFTR, SENDR actions need special scoping treatment: since there can be any type of interaction on a lower level between registers SENDRed and registers to be LIFTRed, the only safe execution time for actions using these registers and for actions referencing registers whose values depend on them (without engaging in full symbolic execution) is when the higher level sub-network has been fully traversed. There is a special scope specification for this-scope T.The process of writing scope clauses into the grammar for an island parser is laborious, and therefore prone to error. The implementation described here can automatically detect all contextsensitive actions and tests and put them into scope clauses containing suitable (and usually optimal) scope specifications. Thus the parser can interpret straight off an ATN grammar that has been written for an ordinary left-to-right parser.The sooping algorithm consists of five passes over the grammar, the first four dealing with the exceptional scoping required by LIFTR and SENDR actions, and the fifth with the rest of the actions and tests in the network. Comments on the algorithm follow the necessarily technical account of it.The five passes of the scoping algorithm will now be described, actions and tests in the network being treated identically.Pass one takes care of the scoping problem with LIFTR actions mentioned in the previous sectionthat a register being LIFTRed must be scoped back at the higher level to at least before the PUSH arc.But if the register is used on the PUSH arc itself, the scoping algorithm should produce correct scope specifications without needing to treat this as a special case. Thus the solution I have adopted is for the algorithm to check whether the register appears on the PUSH arc, and if not, the dummy action (SETR <register> (GETR <register>)) is added to the actions on the PUSH arc.Pass three deals with scoping SENDR actions, giving them the treatment described at the end of the last section -it assigns the scope specification T to all actions which reference registers whose values depend on any of the registers used in actions on the same PUSH arc as a SENDR action. pass 4: Pass four finds all actions that use registers that have been passed down from a higher level by a SENDR, and also actions which use registers dependent on those SENDRed registers, giving the actions scope SENDR. pass 5: The rest of the scoping is performed in pass five. Each action is considered in turn, collecting the names of all registers it uses, and the names of those whose values depend on them. The scope specification is then computed depending on the common pert of all possible paths from the start of the current sub-network to any action which is dependent on the action under consideration. 
This list of states ('left-states') is the intersection of the states to the left of each action which uses any of the collected registers.The algorithm distinguishes the following four cases for the contents of 'left-states':-(1) If NILthere are at least two nonintersecting paths from the left to the arc containing the action which reference registers dependent on those in the action, so return scope specification T.(2) All states in 'left-states' are in loops in the network -it is very difficult to compute the optimal scope specification, so return T (which will always be correct though perhaps not optimal). The problem with loops is that no register should be changed or referenced in a right-to-left parse until control has finally passed out of the loop.(3) The left state of the arc containing the action being scopad is in 'left-states', and the state is not in a loop-all dependent actions are to the right of the arc, so return NIL.(4) Otherwise -return as scope specification a list of all states in 'left-states' that are not in loops.If an action does not use any registers, it obviously does not need scoping, and the algorithm bypasses it. If a scope specification is returned for an action that is already scoped, whether the new scope 'overwrites' the old one depends on what is already there:scope SENDR overwrites scope T scope T overwrites scope <list of states> scope <list of states> is appended to an existing scope <list of states> B. Discussion of the Scoping AlgorithmThe algorithm does not produce totally optimal scope specifications in all circumstances: that is, actions may sometimes be scoped so they are saved for longer in the parse before they are executed than may strictly be necessary. The main shortcoming is in dealing with networks where there are two or more alternative separate paths containing actions using registers computed to be interdependent; for example in scoping the network fragment in figure 4, the two actions using register 'noun' are scoped (NP/) but the paths through them are independent and the register is not used elsewhere, so the actions do not need to be scoped at all. There does not seem to be any way around this problem by modifying the algorithm, but fortunately scope specifications that are not entirely optimal (as in this case) should only minimally affect the performance of the interpreter ~hen parsing a sentence. configtvations 'Sconfigs 'I at the boundaries of each island that are compatible, and then splice those that completely cover a sub-network into as many successively higher levels as possible (by calling Woods' 'Complete-right' function as many times as possible).In a real-time speech understanding system (depending on the strategy it employed), the time saved by this method could be critical to the success of the system.The parser has been tested (Carroll, 1982) with various sized (purely syntactic) grammars, simulating speech processing by the arbitrary selection of one or more words in a typed string as parsing starting points, and the arbitrary addition of words to the left and right of these.It has been observed that the more complex the structure of the sentence being parsed, the more Sconfigs get generated, and consequently the longer the parse takes. 
There are, however, other less obvious factors influencing the number of Sconfigs generated.Seonfigs tend to proliferate embarrassingly when there are many possible paths of JUMP arcs between states on the same level of the grammar due to scoped tests having to be saved and not being immediately executable.If there are no BENDR actions down to the subnetwork containing the JUMPs, then none of the saved tests will have to be carried up to a higher level, and so many of the Sconfigs will be filtered out when the POP arc st that level is processed. But if there are SENDR actions, the Sconfigs will not be filtered so effectively, will be carried up to higher levels, and at each higher level the number of Sconfigs will multiply.Sconfig proliferation and resulting combinatorial explosion will always be associated with island parsing usinog large complex grammars that are purely syntactic~; unfortunately LIFTR and SENDR actions aggravate the problem. However, the utility of these actions more than outweighs the consequent decrease in parse-time efficiency.In the HWIM system, to join together two adjacent islands to make one island covering them both, the smaller island was broken up and the words from it added onto the end of the larger. This obviously wastes all the effort expanded in building the smaller island.A more efficient method of joining two islands which I have implemented, is to merge all the segment I The state of the parse in an island parser is held as a list of segment configurations, each of which represents a partial parse covering one or more words in the utterance. 2 It seems that the HWIM parser also encountered these problems; their solution was to employ semantic grammars, with a large number of WRD arcs, to use both syntactic and semantic categories on CAT arcs, and to expand the set of constituents pushed for to include "semantic constituents".Parsing the same sentence with differing orders of adding the words in it to islands usually results in differing numbers of Sconflgs being created. For example, two parses of the sentence JOHN IS EAGER TO PLEASE. gave the results:run I run 2Sconfigs generated 388 182 parse time (secs.)1.77 1.08The difference was caused by the fact that in the first run, 'IS' was used as an initial island, setting up expectations for more possible distinct final sentence structures than in the second run, which started with the word 'PLEASE'. This difference in ex pectatlon status reflects the different structuring potential of the two words.Island parsing appears to offer a promising solution to the problem of parsing written as well as spoken sentences containing conjunctions; although the ATN formalism is quite powerful in expressing natural language grammars, it faces problems deal ing with sentences containing conjunctions: (WRD AND ...) arcs need to be inserted almost everywhere since AND can conjoin any two constituents of the same type. Boguraev (1982) has suggested that this problem might be overcome by building islands at each conjunction and parsing outwards from them. ATN. For this reason, restrictions might have to be placed on the ATN grammars used, but this requires further investigation.VII ACKNOWLEDGEMENTS I would like to thank Bran Boguraev for his guidance during the writing of the interpreter, and for supplying the ATN grammars I have used. Thanks also to Karen Sparck Jones and John Tait for their comments on earlier drafts of this paper. 
island: parsing is a powerful technique for parsing with Augmented ~ansition Networks (ATNs) which was developed and successfully applied in the HWIM speech understanding project. The HWIM application grammar did not, however, exploit Woods' original full ATN specification. This paper describes an island parsing interpreter based on HWIM, but containing substantial and important extensions to enable it to interpret any grammar which conforms to that full specification of 1970. The most important contributions have been to eliminate the need for prior specification of scope clauses, to provide more power by implementing LIFTR and SENDR actions within the island parsing framework, and to improve the efficiency of the techniques used to merge together partially-built islands within the utterance. This paper also presents some observations about island parsing, based on the use of the parser described, and some suggestions for future directions for island parsing research.In an ordinary ATN parser, the parsing of a sentence is performed unidirectionally (normally left-to-right); the parser traverses each arc in the directed graph of the grammar in the same direction, starting from the initial state.An island ATN parser, on the other hand, can start at any point in the transition network with a word match from anywhere in the input string, not just at the left end, and parse the rest of the string working outwards to the left and right, adding words to each end of the 'island' formed. Indeed, any number of islands can be built, the parser merging the islands together as their boundaries meet. Clearly, in speech processing, island parsing is well suited to gearing sentence processing to the most solid inputs from the acoustic anal yser.The main problems with previous implementations of island parsing for ATNs have been with scope clauses and LIFTR and SENDR actions; essentially, these problems arise because in island parsing structure determination has to work from right-to-left as well as in the more usual left-to-right direction, i.e. against the normal parsing flow.The ATN formalism provides for actions on the arcs of the network which can set and modify the contents of 'registers', and arbitrary tests on an arc to determine whether that arc is to be followed. In an island parser, an action or test is referred to as being context-sensitive when it either requires the value of a register that is set somewhere to the left, or changes the value of a register that is used somewhere also to the left. For each context sensitive action or test, there exists a set of states to its left such that the action can safely be performed if its execution is delayed until the parse has passed through one of these states. This list of states must be expressed, and in the HWIM system (Woods, 1976) , this is done when writing the grammar by using a scope clause. The form of a scope clause is (SCOPE <scope specification> <list of context-sensltive actions>)where the scope specification is the list of precursor states. This requirement for prior specification of scope clauses clearly adds to the burden of the grammar writer.I have implemented a more satisfactory treatment of scope clauses. 
This is described belo~ following the discussion of LIFTR and SENDR actions, which require special handling in scoping.Two important actions (indeed it is difficult to write a grammar of any substantial subset of English without them) defined by Woods (1970) , namely LIFTR and SENDR, present implementation difficulties in an island parsing interpreter. These actions were evidently excluded from the HWIM parser since there is no mention of them by Woods (1976) .The action LIFTR can occur on any arc in the network, to transmit the value of a register up to the next higher level in the network, whereas SENDR can only occur on a PUSH arc, to transmit the value of a register down to a lower level.The same mechanism can be used to implement LIFTR actions as is used to transmit the result of each lower level computation up to the next higher level as the value of the special register '*'.However, LIFTR presents problems with scope clauses in an island parsing ATN interpreter: if an action (LIFTR <register> ...) occurs in a sub-network, any action using that register in any higher sub-network that PUSHes for the one containing the LIFTR must be scoped so that the action is not performed in a right-to-left parse at least until after the PUSH has been executed'. See figure I. I PUSH// action using <register> here must be scoped to before the PUSH arc$ \POP \ ¢ (LIFTR <register> .) Figtre I. Scoping LIFTR actions.So, for example, when parsing English from right to left, tests that the verb and subject agree in person and number (if this information is carried in registers) must be postponed until the PUSH for the beginning of the subject noun phrase. Section III describes how my interpreter takes care of this scoping problem.Since in a right-to-left parse, lower level subnetworks are traversed before the PUSH to them is performed, there is no way of knowing the value of a register that is being SENDRed at least until after the PUSH. Thus all actions involving registers whose values depend on the value of that register must he saved to be executed at the higher level.I have dealt with this by putting such actions into SCOPE clauses containing a special new scope specification, which I call scope SENDR. Actions with scope SENDR are never executed at the current level in the network, but are saved and incorporated into the next higher level subnetwork (possibly with a changed scope specification) during processing of the PUSH at that higher level, as follows:-(I) The form on the FOP arc to be returned as the value of the special register '*' on return to the next higher level is put into an explicit LIFTR action.(2) The scopes of all the saved actions are changed to the same as those of the SENDR actions on the PUSH arc.(3) All LIFTR actions are changed to highlvl-setr actions (see below).(q) Scoped calls to lowlvl-start and lowlvlfinish (see below) are put respectively before and after the saved actions.(5) All the SENDR actions on the PUSH arc are put in front of the lower level saved actions.The rest of the actions on the PUSH are are then processed as normal. The purposes of the actions lowlvl-start and lowlvl-finish are to respectively set up and restore a stack of register contexts (hold-regs), each level in the stack holding the register contents of one level in the network, with the base of the stack representing the highest level of saved actions. The action highlvl-setr performs a SETR at the next higher level of register contexts on the stack. 
~ regs <-((, (NP nphrase)) (regO pphrase)) hold-regs <-NIL This would be translated into the list of saved actions on the left of figure 3, and when control had passed through a set of states such that the actions' scope specifications were satisfied, execution would produce the sequence of operations shown on the right of the figure. Appendix:
null
null
null
null
{ "paperhash": [ "woods|cascaded_atn_grammars" ], "title": [ "Cascaded ATN Grammars" ], "abstract": [ "A generalization of the notion of ATN grammar, called a cascaded ATN (CATN), is presented. CATN's permit a decomposition of complex language understanding behavior into a sequence of cooperating ATN's with separate domains of responsibility, where each stage (called an ATN transducer) takes its input from the output of the previous stage. The paper includes an extensive discussion of the principle of factoring -- conceptual factoring reduces the number of places that a given fact needs to be represented in a grammar, and hypothesis factoring reduces the number of distinct hypotheses that have to be considered during parsing." ], "authors": [ { "name": [ "W. Woods" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null ], "s2_corpus_id": [ "6169596" ], "intents": [ [] ], "isInfluential": [ false ] }
- Problem: The paper discusses the limitations of previous implementations of island parsing for Augmented Transition Networks (ATNs), specifically focusing on issues with scope clauses and LIFTR and SENDR actions in the parsing process. - Solution: The paper proposes an island parsing interpreter based on the HWIM system with extensions to handle any grammar conforming to the full ATN specification of 1970. The key contributions include eliminating the need for prior specification of scope clauses, implementing LIFTR and SENDR actions within the island parsing framework, and improving the efficiency of merging partially-built islands within the utterance.
497
0.016097
null
null
null
null
null
null
null
null
4863fadc066236cd92e29db0ae4b5e21fd8dc47e
777436
null
Knowledge Engineering Approach to Morphological Analysis
Finnish is a highly inflectional language. A verb can have over ten thousand different surface forms, and nominals slightly fewer. Consequently, a morphological analyzer is an important component of a system aiming at "understanding" Finnish. This paper briefly describes our rule-based heuristic analyzer for Finnish nominal and verb forms. Our tests have shown it to be quite efficient: the analysis of a Finnish word in a running text takes an average of 15 ms of DEC 20 CPU-time.
{ "name": [ "J{\\\"a}ppinen, Harri and", "Lehtola, Aarno and", "Nelimarkka, Esa and", "Ylilammi, Matti" ], "affiliation": [ null, null, null, null ] }
null
null
First Conference of the {E}uropean Chapter of the Association for Computational Linguistics
1983-09-01
6
14
null
This paper briefly discusses the application of rule-based systems to the morphological analysis of Finnish word forms. Production systems seem to us a convenient way to express the strongly context-sensitive segmentation of Finnish word forms. This work demonstrates that they can be implemented to efficiently perform segmentations and uncover their interpretations. In a system aiming at interpreting a highly inflectional language, such as Finnish, the morphological analysis of word forms is an important component. Inflectional suffixes carry syntactic and semantic information which is necessary for a syntactic and logical analysis of a sentence. In contrast to major Indo-European languages, such as English, where morphological analysis is often so simple that reports of systems processing these languages usually omit morphological discussion, the analysis of Finnish word forms is a hard problem. A few algorithmic approaches to the morphological analysis of Finnish, i.e. methods using precise and fully informed decisions, have been reported. Brodda and Karlsson (1981) attempted to find the most probable morphological segmentation for an arbitrary Finnish surface-word form without a reference to a lexicon. They report surprisingly high success, close to 90%. However, their system neither transforms stems into a basic form, nor finds morphotactic interpretations. Karttunen et al. (1981) report a LISP program which searches in a root lexicon and in four segment tables for adjacent parts which generate a given surface-word form. Koskenniemi (1983) describes a relational, symmetric model for analysis, as well as for production, of Finnish word forms. He, too, uses a word-root lexicon and suffix lexicons to support comparisons between surface and lexical levels. Our morphological analyzer MORFIN was planned to constitute the first component in our forthcoming Finnish natural-language database query system. We therefore rate highly a computationally efficient method which supports an open lexicon. Lexical entries should carry the minimum of morphological information to allow a casual user to add new entries. We relaxed the requirement of fully informed decisions in favor of progressively generated and tested plausible heuristic hypotheses, dressed in production rules. The analysis of a word in our model represents a multi-level heuristic search. The basic control strategy of MORFIN resembles the one more extensively exploited in the Hearsay-II system (Erman et al., 1980). Finnish morphotactics is complex by any ordinary standard. Nouns, adjectives and verbs take numerous different forms to express case, number, possession, tense, mood, person and other morpheme categories. The problem of analysis is greatly aggravated by context sensitivity. A word stem may obtain different forms depending on the suffixes attached to it. Some morphemes have stem-dependent segments, and some segments are affected by other segments juxtaposed to them. Due to lack of space, we outline here only the structure of Finnish nominals. The surface form of a Finnish nominal may be composed of the following constituents (parentheses denote optionality): (1) root + stem ending + number + case + (possessive) + (clitic). The stem endings comprise a large collection of highly context-sensitive segments which link the word roots with the number and case suffixes in phonologically sound ways.
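To illustrate the flavour of the hypothesis generation implied by (1), the toy sketch below strips suffixes right to left using a few real but drastically incomplete suffix inventories; it ignores stem-ending variation and consonant gradation entirely and is in no way MORFIN's actual rule set.

    # Toy sketch: segment a nominal surface form according to the constituent order (1)
    # root + stem ending + number + case + (possessive) + (clitic). The suffix tables
    # are tiny illustrative samples only.
    CLITICS = {"kin": "clitic -kin", "": None}
    POSSESSIVES = {"ni": "1sg possessive", "si": "2sg possessive", "": None}
    CASES = {"n": "genitive", "ssa": "inessive", "lla": "adessive", "": "nominative"}
    NUMBERS = {"i": "plural", "": "singular"}

    def segmentations(word):
        """Yield (stem, number, case, possessive, clitic) hypotheses, stripping right to left."""
        for cl, cl_tag in CLITICS.items():
            if not word.endswith(cl):
                continue
            w1 = word[:len(word) - len(cl)]
            for po, po_tag in POSSESSIVES.items():
                if not w1.endswith(po):
                    continue
                w2 = w1[:len(w1) - len(po)]
                for ca, ca_tag in CASES.items():
                    if not w2.endswith(ca):
                        continue
                    w3 = w2[:len(w2) - len(ca)]
                    for nu, nu_tag in NUMBERS.items():
                        if not w3.endswith(nu):
                            continue
                        yield w3[:len(w3) - len(nu)], nu_tag, ca_tag, po_tag, cl_tag

    # 'taloissakin' ("in the houses, too") = talo + i + ssa + kin; the hypotheses include
    # ('talo', 'plural', 'inessive', None, 'clitic -kin') alongside competing, mutually
    # exclusive readings that a later stage must confirm or reject against the lexicon.
    for hypothesis in segmentations("taloissakin"):
        print(hypothesis)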
The authoritative Dictionary of Contemporary Finnish classifies nominals into 85 distinct paradigms based on the variation in their stem endings in the nominative, genitive, partitive, essive, and illative cases. The plural in a nominal is signaled by an 'i', 'j', 't', or the null string, depending on the context. The fourteen cases used in Finnish are expressed by one or more suffix types each. Furthermore, consonant gradation may take place in the roots and stem endings with certain manifestations of 'p', 't' or 'k'. As an example, consider the word 'pursi' (= yacht). The dictionary representation 'pur si 42' indicates the root 'pur', the stem ending 'si' in the nominative singular case, and the paradigm number 42. Morpheme productions recognize legal morphological surface-segment configurations in a word, and slice and interpret the word accordingly. We use directly the allomorphic variants of the morphemes. Since possible segment configurations overlap, several mutually exclusive hypotheses are usually produced on the morphotactic level. All valid interpretations of a homographic word form are among them. The extracted rules were packed and compiled into a network of 33 distinct state-transition automata (3 for clitic, 1 for person, 6 for tense, 3 for case, 2 for number, 5 for adjective comparation, 3 for passive, 5 for participle, and 5 for infinitive segments). These automata were generated by 204 morpheme productions of the form: (4) name: (2nd_context)(1st_context)segment --> POSTULATE(interpretation, next). 'Segment' exhibits an allomorph; the optional '1st' and '2nd' contexts indicate 0 to 2 left-contextual letters. The operation POSTULATE separates a recognized segment, attaches an interpretation to it, and proceeds to the indicated automata ('next'). For example, one production recognizes the substring 'n', if preceded by a vowel, as an allomorph for the singular genitive case, separates 'n', and proceeds in parallel to two automata for number, three for participles, two for infinitive, and one for comparation. Stem productions are case- and number-specific heuristic rules (genus-, mood- and tense-specific for verbs) postulating nominative singular nouns as basic forms (1st infinitive for verbs) which, under the postulated morphotactic interpretation, might have resulted in the observed stem form on the morphotactic level. They may reject a candidate stem form as an impossible transformation, or produce one or more basic-form hypotheses. The Reverse Dictionary of Finnish lists close to 100,000 Finnish words sorted backwards. For each word the dictionary tags its syntactic category and the paradigm number. From that corpus we extracted heuristic information about equivalence classes of stem behavior. This knowledge we dressed into productions of the following form: (6) condition --> POSTULATE(cut, string, shift). If the condition of a production is satisfied, a basic-form hypothesis is postulated on the basic word-form level by cutting the recognized stem, adding a new string (separated by a blank to indicate the boundary between the root and the stem ending), and possibly shifting the blank. These operations are indicated by the arguments 'cut', 'string', and 'shift'. A well-formed condition (WFC) is defined recursively as follows. Any letter in the Finnish alphabet is a WFC, and such a condition is true if the last letter of a stem matches the letter. If &1, &2, ...,
&n are WFCs, then the following constructions are also WFCs: (7) (I) &2&1 (II) <&1, &2, ..., &n>. (I) is true if &1 and &2 are true, in that order, under the stipulation that the recognized letters in a stem are consumed. (II) is true if &1 or &2 or ... or &n is true. The testing in (II) proceeds from left to right and halts if recognition occurs. The recognized letters are consumed. A capital letter can be used as a macro name for a WFC. For example, a genitive 'n'-specific production (8) <Ka,y>hde --> POSTULATE(3,'ksi',0) ('K' is an abbreviation for <d,f,g,h,...>, the consonants) recognizes, among other stems, the genitive stem 'kahde' and generates the basic-form hypothesis 'ka ksi' (= two). We collected 12 sets of productions for nominal and 6 for verb stems. On average, a set has about 20 rules. These sets were compiled into 18 efficient state-transition automata. We could also apply productions to consonant gradation. However, since a Finnish word can have at most two stems (weak and strong), MORFIN trades storage for computation and stores double stems in the lexicon. Dictionary look-up: The dictionary look-up procedure confirms or rejects the basic word-form hypotheses that have proliferated from the previous stages by matching them against the lexicon. Thus in MORFIN the only morphological information a dictionary entry carries is the boundary between the root and the stem ending in the basic word form and grade. All other morphological knowledge is stored in MORFIN in an active form as rules. In MORFIN, input words are totally analyzed before a reference to the lexicon happens. Consequently, words not existing in the lexicon are also analyzed. This fact and the simple lexical form make it easy to add new words to the lexicon: a user simply chooses the right alternative(s) from the postulated basic word-form hypotheses. MORFIN has been fully implemented in standard PASCAL and is in the final stages of testing. The lexicons contain the nearly 2,000 most frequent Finnish words. In addition to one lexicon for nominals and one for verbs, MORFIN has two "front" lexicons for unvarying words and words with slight variation (pronouns, adverbs, etc., and those with exceptional forms). Currently MORFIN does not analyze compound nouns into parts (as Karttunen et al. (1981) and Koskenniemi (1983) do). By modifying our system slightly we could do this by calling the system recursively. We rejected this kind of analysis because the semantics of many compounds must be stored as separate lexical entries in our database interface anyway. MORFIN does not produce word forms as the other two systems do. With respect to the goals we set, our tests rate MORFIN quite well (Jäppinen et al., 1983). Lexical entries are simple and their addition is easy. On average, only around 4 basic word-form hypotheses are produced on the basic word-form level. The analysis of a word in randomly selected newspaper texts takes about 15 ms of DEC 2060 CPU time. Karttunen et al. (1981) report on their system that "It can analyze a short unambiguous word in less than 20 ms [DEC-2060/Interlisp] ... a long word or a compound ... can take ten times longer." Koskenniemi (1983) writes that "with a large lexicon it [this system] takes about 0.1 CPU seconds [Burroughs B7800/PASCAL] to analyze a reasonably complicated word form." Both Karttunen et al. (1981) and Koskenniemi (1983) proceed from left to right and compare an input word with forms generated from lexical entries. 
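The fragment below is a compressed Python sketch of how a morpheme production of form (4) and a stem production of form (6) might be encoded. It is an assumed reconstruction for illustration only: the function names are invented here, the recursive WFC of rule (8) is flattened into two explicit suffix patterns, and nothing of MORFIN's compiled state-transition automata or PASCAL implementation is reproduced.

```python
# Simplified sketch (not MORFIN code) of the two production types.
VOWELS = set("aeiouyäö")
CONSONANTS = set("bcdfghjklmnpqrstvwxz")

def morpheme_production_genitive_n(word):
    """(4)-style rule: a final 'n' preceded by a vowel is separated and
    interpreted as the singular genitive suffix."""
    if len(word) >= 2 and word[-1] == "n" and word[-2] in VOWELS:
        return [(word[:-1], {"case": "genitive", "number": "singular"})]
    return []

def stem_production_hde(stem, interpretation):
    """(6)-style rule approximating (8): <Ka,y>hde --> POSTULATE(3,'ksi',0).
    The recursive WFC is flattened into two explicit suffix patterns."""
    matches = (stem.endswith("yhde") or
               (len(stem) >= 5 and stem.endswith("ahde") and stem[-5] in CONSONANTS))
    if not matches:
        return []
    # POSTULATE(3, 'ksi', 0): cut 3 letters, add 'ksi', a blank marks the root boundary
    return [(stem[:-3] + " ksi", interpretation)]

# 'kahden', the genitive singular of 'kaksi' (= two):
for stem, interp in morpheme_production_genitive_n("kahden"):
    print(stem_production_hde(stem, interp))   # [('ka ksi', {'case': 'genitive', ...})]
```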
It is not clear how such left-to-right comparison models explain the phenomenon that a native speaker of Finnish spontaneously analyzes grammatical but meaningless word forms as well. Most Finns would probably agree that, for instance, 'vimpuloissa' is a plural inessive form of a meaningless word 'vimpula'. How can a model based on comparison function when there is no lexical entry to be compared with? Our model encounters no problems with new or meaningless words. 'Vimpuloissa', if given as an input, would produce, among others, the hypothesis 'vimpul a' with the correct interpretation. It would be rejected only because it is a nonexistent Finnish word.
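To make the hypothesize-and-test control structure concrete, here is a toy end-to-end sketch. It is constructed for illustration only: the single suffix rule, the stem alternation, and the two-word stand-in lexicon are assumptions of this sketch, not MORFIN's rule set. The point it mirrors is that hypotheses are generated before the lexicon is consulted, so an unknown but well-formed word such as 'vimpuloissa' still receives a full analysis and is rejected only at the final look-up.

```python
# Toy illustration of MORFIN-style hypothesize-and-test (not the real rules).
LEXICON = {"talo", "kala"}   # tiny stand-in for the basic-form lexicon

def analyze(word):
    hypotheses = []
    # Morpheme level: plural 'i' followed by the inessive suffix 'ssa'
    if word.endswith("issa"):
        stem = word[:-4]                        # 'vimpuloissa' -> 'vimpulo'
        interp = {"number": "plural", "case": "inessive"}
        # Stem level: which basic forms could underlie this plural stem?
        candidates = [stem]                     # o-stems keep 'o' before plural 'i' (talo -> taloi-)
        if stem.endswith("o"):
            candidates.append(stem[:-1] + "a")  # some a-stems show 'o' there (kala -> kaloi-)
        # Dictionary look-up comes last and only confirms or rejects hypotheses
        for basic in candidates:
            hypotheses.append({"basic_form": basic, **interp,
                               "in_lexicon": basic in LEXICON})
    return hypotheses

print(analyze("taloissa"))      # the 'talo' hypothesis is confirmed by the lexicon
print(analyze("vimpuloissa"))   # 'vimpulo' and 'vimpula' are produced; neither is in the lexicon
```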
null
null
null
null
Main paper: introduction: This paper briefly discusses the application of rule-based systems to the morphological analysis of Finnish word forms. Production systems seem to us a convenient way to express the strongly context-sensitive segmentation of Finnish word forms. This work demonstrates that they can be implemented to efficiently perform segmentations and uncover their interpretations. In a system aiming at interpreting a highly inflectional language, such as Finnish, the morphological analysis of word forms is an important component. Inflectional suffixes carry syntactic and semantic information which is necessary for a syntactic and logical analysis of a sentence. In contrast to major Indo-European languages, such as English, where morphological analysis is often so simple that reports of systems processing these languages usually omit morphological discussion, the analysis of Finnish word forms is a hard problem. A few algorithmic approaches, i.e. methods using precise and fully-informed decisions, to a morphological analysis of Finnish have been reported. Brodda and Karlsson (1981) attempted to find the most probable morphological segmentation for an arbitrary Finnish surface-word form without a reference to a lexicon. They report surprisingly high success, close to 90%. However, their system neither transforms stems into a basic form, nor finds morphotactic interpretations. Karttunen et al. (1981) report a LISP program which searches in a root lexicon and in four segment tables for adjacent parts which generate a given surface-word form. Koskenniemi (1983) describes a relational, symmetric model for analysis, as well as for production, of Finnish word forms. He, too, uses a word-root lexicon and suffix lexicons to support comparisons between surface and lexical levels. Our morphological analyzer MORFIN was planned to constitute the first component in our forthcoming Finnish natural-language database query system. We therefore rate highly a computationally efficient method which supports an open lexicon. Lexical entries should carry the minimum of morphological information to allow a casual user to add new entries. We relaxed the requirement of fully informed decisions in favor of progressively generated and tested plausible heuristic hypotheses, dressed in production rules. The analysis of a word in our model represents a multi-level heuristic search. The basic control strategy of MORFIN resembles the one more extensively exploited in the Hearsay-II system (Erman et al., 1980). Finnish morphotactics is complex by any ordinary standard. Nouns, adjectives and verbs take numerous different forms to express case, number, possession, tense, mood, person and other morpheme categories. The problem of analysis is greatly aggravated by context sensitivity. A word stem may obtain different forms depending on the suffixes attached to it. Some morphemes have stem-dependent segments, and some segments are affected by other segments juxtaposed to them. Due to lack of space, we outline here only the structure of Finnish nominals. The surface form of a Finnish nominal may be composed of the following constituents (parentheses denote optionality): (I) root + stem ending + number + case + (possessive) + (clitic). The stem endings comprise a large collection of highly context-sensitive segments which link the word roots with the number and case suffixes in phonologically sound ways. 
The authoritative Dictionary of Contemporary Finnish classifies nominals into 85 distinct paradigms based on the variation in their stem endings in the nominative, genitive, partitive, essive, and illative cases. The plural in a nominal is signaled by an 'i', 'j', 't', or the null string, depending on the context. The fourteen cases used in Finnish are expressed by one or more suffix types each. Furthermore, consonant gradation may take place in the roots and stem endings with certain manifestations of 'p', 't' or 'k'. As an example, consider the word 'pursi' (= yacht). The dictionary representation 'pur si 42' indicates the root 'pur', the stem ending 'si' in the nominative singular case, and the paradigm number 42. Morpheme productions recognize legal morphological surface-segment configurations in a word, and slice and interpret the word accordingly. We use directly the allomorphic variants of the morphemes. Since possible segment configurations overlap, several mutually exclusive hypotheses are usually produced on the morphotactic level. All valid interpretations of a homographic word form are among them. The extracted rules were packed and compiled into a network of 33 distinct state-transition automata (3 for clitic, 1 for person, 6 for tense, 3 for case, 2 for number, 5 for adjective comparation, 3 for passive, 5 for participle, and 5 for infinitive segments). These automata were generated by 204 morpheme productions of the form: (4) name: (2nd_context)(1st_context)segment --> POSTULATE(interpretation, next). 'Segment' exhibits an allomorph; the optional '1st' and '2nd' contexts indicate 0 to 2 left-contextual letters. The operation POSTULATE separates a recognized segment, attaches an interpretation to it, and proceeds to the indicated automata ('next'). For example, one production recognizes the substring 'n', if preceded by a vowel, as an allomorph for the singular genitive case, separates 'n', and proceeds in parallel to two automata for number, three for participles, two for infinitive, and one for comparation. Stem productions are case- and number-specific heuristic rules (genus-, mood- and tense-specific for verbs) postulating nominative singular nouns as basic forms (1st infinitive for verbs) which, under the postulated morphotactic interpretation, might have resulted in the observed stem form on the morphotactic level. They may reject a candidate stem form as an impossible transformation, or produce one or more basic-form hypotheses. The Reverse Dictionary of Finnish lists close to 100,000 Finnish words sorted backwards. For each word the dictionary tags its syntactic category and the paradigm number. From that corpus we extracted heuristic information about equivalence classes of stem behavior. This knowledge we dressed into productions of the following form: (6) condition --> POSTULATE(cut, string, shift). If the condition of a production is satisfied, a basic-form hypothesis is postulated on the basic word-form level by cutting the recognized stem, adding a new string (separated by a blank to indicate the boundary between the root and the stem ending), and possibly shifting the blank. These operations are indicated by the arguments 'cut', 'string', and 'shift'. A well-formed condition (WFC) is defined recursively as follows. Any letter in the Finnish alphabet is a WFC, and such a condition is true if the last letter of a stem matches the letter. If &1, &2, ...,
&n are WFCs, then the following constructions are also WFCs: (7) (I) &2&1 (II) <&1, &2, ..., &n>. (I) is true if &1 and &2 are true, in that order, under the stipulation that the recognized letters in a stem are consumed. (II) is true if &1 or &2 or ... or &n is true. The testing in (II) proceeds from left to right and halts if recognition occurs. The recognized letters are consumed. A capital letter can be used as a macro name for a WFC. For example, a genitive 'n'-specific production (8) <Ka,y>hde --> POSTULATE(3,'ksi',0) ('K' is an abbreviation for <d,f,g,h,...>, the consonants) recognizes, among other stems, the genitive stem 'kahde' and generates the basic-form hypothesis 'ka ksi' (= two). We collected 12 sets of productions for nominal and 6 for verb stems. On average, a set has about 20 rules. These sets were compiled into 18 efficient state-transition automata. We could also apply productions to consonant gradation. However, since a Finnish word can have at most two stems (weak and strong), MORFIN trades storage for computation and stores double stems in the lexicon. Dictionary look-up: The dictionary look-up procedure confirms or rejects the basic word-form hypotheses that have proliferated from the previous stages by matching them against the lexicon. Thus in MORFIN the only morphological information a dictionary entry carries is the boundary between the root and the stem ending in the basic word form and grade. All other morphological knowledge is stored in MORFIN in an active form as rules. In MORFIN, input words are totally analyzed before a reference to the lexicon happens. Consequently, words not existing in the lexicon are also analyzed. This fact and the simple lexical form make it easy to add new words to the lexicon: a user simply chooses the right alternative(s) from the postulated basic word-form hypotheses. MORFIN has been fully implemented in standard PASCAL and is in the final stages of testing. The lexicons contain the nearly 2,000 most frequent Finnish words. In addition to one lexicon for nominals and one for verbs, MORFIN has two "front" lexicons for unvarying words and words with slight variation (pronouns, adverbs, etc., and those with exceptional forms). Currently MORFIN does not analyze compound nouns into parts (as Karttunen et al. (1981) and Koskenniemi (1983) do). By modifying our system slightly we could do this by calling the system recursively. We rejected this kind of analysis because the semantics of many compounds must be stored as separate lexical entries in our database interface anyway. MORFIN does not produce word forms as the other two systems do. With respect to the goals we set, our tests rate MORFIN quite well (Jäppinen et al., 1983). Lexical entries are simple and their addition is easy. On average, only around 4 basic word-form hypotheses are produced on the basic word-form level. The analysis of a word in randomly selected newspaper texts takes about 15 ms of DEC 2060 CPU time. Karttunen et al. (1981) report on their system that "It can analyze a short unambiguous word in less than 20 ms [DEC-2060/Interlisp] ... a long word or a compound ... can take ten times longer." Koskenniemi (1983) writes that "with a large lexicon it [this system] takes about 0.1 CPU seconds [Burroughs B7800/PASCAL] to analyze a reasonably complicated word form." Both Karttunen et al. (1981) and Koskenniemi (1983) proceed from left to right and compare an input word with forms generated from lexical entries. 
It is not clear how such left-to-right comparison models explain the phenomenon that a native speaker of Finnish spontaneously analyzes grammatical but meaningless word forms as well. Most Finns would probably agree that, for instance, 'vimpuloissa' is a plural inessive form of a meaningless word 'vimpula'. How can a model based on comparison function when there is no lexical entry to be compared with? Our model encounters no problems with new or meaningless words. 'Vimpuloissa', if given as an input, would produce, among others, the hypothesis 'vimpul a' with the correct interpretation. It would be rejected only because it is a nonexistent Finnish word. Appendix:
null
null
null
null
{ "paperhash": [ "koskenniemi|two-level_model_for_morphological_analysis", "erman|the_hearsay-ii_speech-understanding_system:_integrating_knowledge_to_resolve_uncertainty" ], "title": [ "Two-Level Model for Morphological Analysis", "The Hearsay-II Speech-Understanding System: Integrating Knowledge to Resolve Uncertainty" ], "abstract": [ "This paper presents a new linguistic, computationally implemented model for morphological analysis and synthesis. It is general in the sense that the same language independent algorithm and the same computer program can operate on a wide range of languages, including highly inflected ones such as Finnish, Russian or Sanskrit. The new model is unrestricted in scope and it is capable of handling the whole language system as well as ordinary running text. A full description for Finnish has been completed and tested, and the entries in the Dictionary of Modern Standard Finnish have been converted into a format compatible with it. \n \nThe model is based on a lexicon that defines the word roots, inflectional morphemes and certain nonphonological alternation patterns, and on a set of parallel rules that define phonologically oriented phenomena. The rules are implemented as parallel finite state automata, and the same description can be run both in the producing and in the analyzing direction.", "The Hearsay-II system, developed during the DARPA-sponsored five-year speech-understanding research program, represents both a specific solution to the speech-understanding problem and a general framework for coordinating independent processes to achieve cooperative problem-solving behavior. As a computational problem, speech understanding reflects a large number of intrinsically interesting issues. Spoken sounds are achieved by a long chain of successive transformations, from intentions, through semantic and syntactic structuring, to the eventually resulting audible acoustic waves. As a consequence, interpreting speech means effectively inverting these transformations to recover the speaker's intention from the sound. At each step in the interpretive process, ambiguity and uncertainty arise. \n \nThe Hearsay-II problem-solving framework reconstructs an intention from hypothetical interpretations formulated at various levels of abstraction. In addition, it allocates limited processing resources first to the most promising incremental actions. The final configuration of the Hearsay-II system comprises problem-solving components to generate and evaluate speech hypotheses, and a focus-of-control mechanism to identify potential actions of greatest value. Many of these specific procedures reveal novel approaches to speech problems. Most important, the system successfully integrates and coordinates all of these independent activities to resolve uncertainty and control combinatorics. Several adaptations of the Hearsay-II framework have already been undertaken in other problem domains, and it is anticipated that this trend will continue; many future systems necessarily will integrate diverse sources of knowledge to solve complex problems cooperatively. \n \nDiscussed in this paper are the characteristics of the speech problem in particular, the special kinds of problem-solving uncertainty in that domain, the structure of the Hearsay-II system developed to cope with that uncertainty, and the relationship between Hearsay-II's structure and those of other speech-understanding systems. 
The paper is intended for the general computer science audience and presupposes no speech or artificial intelligence background." ], "authors": [ { "name": [ "K. Koskenniemi" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "L. Erman", "F. Hayes-Roth", "V. Lesser", "R. Reddy" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null ], "s2_corpus_id": [ "2816585", "118556" ], "intents": [ [ "background" ], [ "methodology" ] ], "isInfluential": [ false, false ] }
null
497
0.028169
null
null
null
null
null
null
null
null
d4ff6b3d6ba7cb8b21b4dfb955d4e6b262832b50
33534751
null
Logos: the intelligent translation system
Logos is stated to be a full-capability (lexical, syntactic, semantic) MT system on a standard word processor.
{ "name": [ "Hawes, Ralph E." ], "affiliation": [ null ] }
null
null
Proceedings of Translating and the Computer 5: Tools for the trade
1983-11-01
0
1
null
I would like to introduce our company - Logos Corporation - and its product, Logos™, the Intelligent Translation System™. I know that Logos is not a complete stranger to those of you who have been attending these conferences during the past several years. I searched past proceedings and came across a few brief references to it, but usually there were few details. I think people saw us as a special projects company, which had written translation programs for English to Vietnamese and English to Farsi. Our work in these and other languages like French, Russian and German helped us to develop our product and our company, which has become much more than a company for special language projects. Logos recognised that our hard-earned expertise in natural language processing coupled with the past decades' giant technological advances in computer price/performance provided us with a unique opportunity: to bring together a number of proven, powerful translation techniques plus some special ideas of our own, in a hardware-and-software environment that is economical and friendly. This blend of ingredients has resulted in our offering Logos, the Intelligent Translation System, on Wang, which we believe to be the world's leading word processing system. The topics that I will cover are:
- The company: Logos Corporation
- The characteristics of a desirable language translation system
- Logos, the Intelligent Translation System.
Ever since the ALPAC report in 1966, which discouraged efforts to produce workable machine translation (MT) of natural language, workers in MT have started only a few new projects (that is, until very recently). In general, their results have been widely reported. But one of the largest, and most sustained, efforts in this field has, until now, received little attention: the effort of Logos Corporation. Logos Corporation is an American company with a German subsidiary. Corporate and marketing headquarters are in the US, along with a development staff of about seventy people. Our current sales efforts are conducted from our sales office in Frankfurt, West Germany. The company was founded in 1969 by Bernard E. Scott, who is its president and principal linguist. Supported by private capital and development contracts, the company has always pursued a single goal: workable machine translation. In the 1970s, the company's best-known technical project was the production of English-to-Vietnamese translation for the US government. Subsequent projects entailed work in French, Russian, Farsi, Spanish and German. When we directed our efforts to developing the commercial Logos product, we were able to apply the lessons learned in these prior efforts, and concentrate on incorporating proved techniques while at the same time avoiding many traps and pitfalls. All our years of experience have contributed to the specification and development of the Intelligent Translation System, which incorporates a language-independent Automatic Translator™ of advanced capabilities. We believe the Logos system is the world's most advanced translation system, embodying a state-of-the-art translation program free from architectural or logical limitations, with unprecedented capability for translating accurately. Let's briefly examine the most desirable characteristics of a robust automated language translation system. 
To begin with:
- The system should be capable of translating on its own, at computer speed, without operator intervention.
- The system should be able to accept text to be translated in an economic, reliable, friendly manner. There should be several input options ranging from keyboard to floppy disk or tape and including OCR and telecommunications. This also means the ability to interface with today's most common computer equipment vendors both in teleprocessing protocol and in text format.
- The system should faithfully reproduce the format of the source text in the target text.
- The system should be capable of considering syntax, and the deep semantic issues involved in transferring meaning and nuance from one language to another, much more than just word equivalents.
- The system should have multi-target capability, able to translate from a single source to a number of targets.
- The system should be capable of learning new vocabulary, new semantically desirable transfers, and the technical jargon of new technologies.
- The system should be capable of unlimited logical expansion in any direction. Dictionary transfers can be projected to millions of words, and semantic tables to many, many rules for each or any word.
- The system should be able to translate the full range of expository textual material, technical documentation, and all types of reports and factual articles. It should not require input with an artificially restricted vocabulary.
- There should be a technique wherein both users and the developer can improve the system through updates. An ideal system should not become a custom, stand-alone system soon after it is installed. While it must be able to incorporate the special requirements of each user so as to make them appear an integral part of the system, these adaptations must not prevent system enrichment from a central source.
- The system must be cost-effective when compared with present manual efforts or with other MT systems.
- The system should be friendly and easy to use in all aspects.
In 1982, at Germany's Hanover Fair, the company announced its first commercial product: the Intelligent Translation System, with the Automatic Translator running in a microprocessor. The operating environment was and is the hardware and software of the Wang OIS 140/145 Office Information System. This has recently been extended to include the larger Wang VS System. Our first language pair was German-to-English, which we are demonstrating during this meeting. At the 1983 Hanover Fair, the company announced English-to-German and demonstrated that product, which is currently in field test. From a customer's point of view, this product is a language translator that operates on a word processor - and a standard word processor at that. And the market response has been enthusiastic, as you might expect. Let's examine this product in light of the desirable characteristics previously described... Logos executes its draft translation independently of the translator. In typical operating mode, the translator selects the text (or series of texts) to be translated from the word processing library. The ensuing Logos translation is fully automatic. The program returns each completed translation to the word processing text library for subsequent post-editing by the translator. The Logos translation can be considered as the first draft or as a quick scan for information. However, while Logos is broadly capable, and never forgets a transfer, it is not a skilled, professional human translator. 
Before a translated document is ready for publication and distribution, a competent translator must review it, and edit it using the word processing powers of the Wang system. The human translator is essentially the manager of this process -not a limiting variable.Being resident as an integrated function on this leading word processing system instantly provides Logos with all the capabilities associated with that system. Logos commands operate through standard word processing keyboards. Optical character readers, local network and long line telecommunications capabilities and special printers for output, are all standard attachments to the Logos/Wang systems. The integration of natural language translation with word processing is a powerful and natural combination of functions.To someone unfamiliar with translation, the format of a document may seem to be relatively unimportant when compared with the difficulties associated with producing an acceptable translation. The practical experience of Logos has proven this not to be true. In nearly every case translations need to be format-faithful to the source to be acceptable. Once again, Logos's integration into the word processing system has a significant advantage. The Wang WP system is already successfully interfaced to a wide variety of other manufacturers' systems and can maintain format control acceptable to the Logos translation system. Format integrity is neither simple nor easily achieved when one is transferring text from one system to another. It is further complicated when the systems are of different manufacture and when telecommunications are involved. Logos and Wang have successfully dealt with these issues.Logos is semantically strong; this strength is at the heart of our system, and is its measure.The Logos system can render different contextdependent translations for the same source word, depending on that word's specific use in each sentence, especially if it is a verb (or a word with a verb-derived or verb-related sense). This is one of the most powerful of the many Logos product features, and it is worth briefly looking inside the system -at what we call the Semantic Table. This is an ordered set of rules for resolving verbal constituents to their appropriate nuance, and then for transferring this nuance into an appropriate target language transfer. For example, there are rules in the Semantic Table handling over ninety senses of the verb 'kommen'.But the practical significance of semantic strength is simply its ability to get the right transfers. Because the more often we get the transfers right, the less editing time is needed to get the text right.A Logos translation out of one source language can be a translation into more than one target language. Additional programming is not necessary; all that is necessary is to have a multi-target data file that defines the additional target languages and appropriate dictionaries.Logos is capable of accepting new words into its dictionary as soon as they are identified as new. The process of describing new words to the permanent dictionary has been carefully designed to be simple and accurate. Logos's interactive process of dictionary enhancement is called ALEX™: the Automatic Lexicographer. No understanding of programming is needed to make effective use of ALEX. The user has only to answer a few questions put on the screen about the word he wants to add. The exact sequence of questions depends in part on the user's previous responses. 
ALEX relates the new words to words already in the system via analogy and user query responses. Unique semantic codes are created for the new entries and they are immediately entered, with the word, into the dictionary. The ability of a very large, logically complicated system to remain flexible and expandable has increased significantly in recent years. The highly visible improvements in the price/performance of computer hardware have largely overshadowed the tremendous progress made in software architecture and design. Systems conceived and implemented fifteen or even ten years ago bear little resemblance to similar systems being implemented today. Logos makes extensive use of tables, a highly efficient database approach to dictionary management, and an operating system that enables efficient program execution on a standard word processing system. I might add that Logos's development effort utilises a mainframe whose memory has a capacity of 8 megabytes. It was no small accomplishment for the company's programming staff to convert Logos software to run in less than 1 per cent of the space of the original, that is, the 64 kilobytes of the Wang microprocessor. The conversion did not produce any degradation either in quality of translation or in any other feature of the system. It is the stated objective of Logos to translate the full range of informative material that is the appropriate source for MT. The present German-to-English general dictionary contains over 100,000 entries. In addition, there is provision for a large number of subject-matter-coded sub-dictionaries, for company-specific dictionaries, and for private-confidential dictionaries. The Logos approach to semantics will enable the system to continually improve its ability to translate complex non-technical material. Standard, yet customer-specific system: The concept of a customised 'just for me' system may have initial appeal to the first-time user, but practical experience would quickly convince otherwise. A custom system requires custom support, and custom support is at best expensive, and at worst not always available when needed, price notwithstanding. On the other hand, a user has a right to expect his system to be responsive to his particular needs and as such act as if it is custom. The question is how to manage to be standard and yet appear to be customer-specific. Logos has done exactly that. The system consists of a core containing the translation code and appropriate interfaces. The Logos dictionary is a separate file to which the customer can add new words as well as new words supplied by Logos. Software releases for dictionary and linguistics improve the system without disturbing a customer's previous additions. Determining the cost-effectiveness of any computer application can be very complicated. But put simply, the translation system must have a cost advantage over the existing system. Computing this advantage is complex and requires the inclusion of all relevant costs. However, in most cases companies only look at the direct cost per line of internal and external translation. But translation can have an impact on a business far beyond the size of its operating budget. Consider that translation may be keeping a new product out of its foreign markets because of backlogged documentation. The inaccurate translation of a patent could cost a company millions; its accurate translation could save millions. 
The benefit of a general-purpose translation capability that is timely, accurate, flexible, and instantly available can be of great value to a multinational organisation. Logos on word processing offers such a capability.The terms 'friendly' and 'easy to use' are overused. I will stand on the widely recognised reputation of the host computers, the Wang OIS or VS, as being easy and efficient to use. Further I can assure you that all Logos screen functions have been carefully designed to be in harmony with the Wang command screens. We believe that Logos is exceptionally user-friendly.In the eleven-point description I have just given, I have several times referred to Logos's user -the translator. The translator who works with the system -initiates and schedules translations; -post-edits machine output; -interacts with the Logos dictionary.Translators schedule translations during the work-day, and Logos does them working day and night if necessary. The computer system can be hard at work translating in the background while other functions such as text entry, post-editing, new word entry or translation management are actively going on at each or any terminal. And a Logos system can have thirty or more workstations. Text to be translated can be entered at any WP station and edited at stations in the translation department.What has really happened is that the translation function and the translator have been brought into the mainstream of the organisation. Translation is no longer isolated. We believe that Logos increases the productivity of translators and thus their value to the company. Further, we believe that there is a large latent market for translation: translations that just aren't done at present because of cost, or translation backlog. How many copies were made when carbon paper had to be used? Xerographic technology dramatically increased duplication activity because it became easy to do. We believe Logos technology will do the same for translation, and position translators in the mainstream of the business.It is impossible to compress literally hundreds of man-years of effort into twenty minutes, and this paper gives only a glimpse of what we believe is the most advanced commercial language translation system in the world. As the new arrival on the machine translation scene we expect to be examined critically, and hopefully with interest.Ralph E. Hawes, Vice-President, Marketing, Logos Corporation, 100 Fifth Avenue, Waltham, MA 02154, USA.
null
null
null
null
Main paper: : I would like to introduce our company -Logos Corporationand its product, Logos™, the Intelligent Translation System™. I know that Logos is not a complete stranger to those of you who have been attending these conferences during the past several years. I searched past proceedings and came across a few brief references to it, but usually there were few details. I think people saw us as a special projects company, which had written translation programs for English to Vietnamese and English to Farsi.Our work in these and other languages like French, Russian and German helped us to develop our product and our company, which has become much more than a company for special language projects.Logos recognised that our hard-earned expertise in natural language processing coupled with the past decades' giant technological advances in computer price/performance provided us with a unique opportunity: to bring together a number of proven, powerful translation techniques plus some special ideas of our own, in a hardware-and-software environment that is economical and friendly. This blend of ingredients has resulted in our offering Logos, the Tools for the Trade, V. Lawson (ed,) , © Aslib and Ralph E. Hawes.Intelligent Translation System, on Wang, which we believe to be the world's leading word processing system. The topics that I will cover are:-The company -Logos Corporation -The characteristics of a desirable language translation system -Logos, the Intelligent Translation System.Ever since the ALPAC report in 1966, which discouraged efforts to produce workable machine translation (MT) of natural language, workers in MT have started only a few new projects (that is, until very recently). In general, their results have been widely reported. But one of the largest, and most sustained, efforts in this field has, until now, received little attention: the effort of Logos Corporation.Logos Corporation is an American company with a German subsidiary. Corporate and marketing headquarters are in the US, along with a development staff of about seventy people. Our current sales efforts are conducted from our sales office in Frankfurt, West Germany.The company was founded in 1969 by Bernard E. Scott, who is its president and principal linguist. Supported by private capital and development contracts, the company has always pursued a single goal: workable machine translation .In the 1970s, the company's best-known technical project was the production of English-to-Vietnamese translation for the US government. Subsequent projects entailed work in French, Russian, Farsi, Spanish and German. When we directed our efforts to developing the commercial Logos product, we were able to apply the lessons learned in these prior efforts, and concentrate on incorporating proved techniques while at the same time avoiding many traps and pitfalls.All our years of experience have contributed to the specification and development of the Intelligent Translation System, which incorporates a language-independent Automatic Translator™ of advanced capabilities.We believe the Logos system is the world's most advanced translation system, embodying a state-of-the-art translation program free from architectural or logical limitations, with unprecedented capability for translating accurately.Let's briefly examine the most desirable characteristics of a robust automated language translation system. 
To begin with:-The system should be capable of translating on its own, at computer speed, without operator intervention.-The system should be able to accept text to be translated in an economic, reliable, friendly manner. There should be several input options ranging from keyboard to floppy disk or tape and including OCR and telecommunications.This also means the ability to interface with today's most common computer equipment vendors both in teleprocessing protocol and in text format.-The system should faithfully reproduce the format of the source text in the target text.-The system should be capable of considering syntax, and the deep semantic issues involved in transferring meaning and nuance from one language to anothermuch more than just word equivalents.-The system should have multi-target capability, able to translate from a single source to a number of targets.-The system should be capable of learning new vocabulary, new semantically desirable transfers, and the technical jargon of new technologies.-The system should be capable of unlimited logical expansion in any direction. Dictionary transfers can be projected to millions of words, and semantic tables to many, many rules for each or any word.-The system should be able to translate the full range of expository textual material, technical documentation, and all types of reports and factual articles. It should not require input with an artificially restricted vocabulary.-There should be a technique wherein both users and the developer can improve the system through updates.An ideal system should not become a custom, stand-alone system soon after it is installed.While it must be able to incorporate the special requirements of each user so as to make them appear an integral part of the system, these adaptations must not prevent system enrichment from a central source.-The system must be cost-effective when compared with present manual efforts or with other MT systems.-The system should be friendly and easy to use in all aspects.In 1982, at Germany's Hanover Fair, the company announced its first commercial product: the Intelligent Translation System, with the Automatic Translator running in a microprocessor. The operating environment was and is the hardware and software of the Wang OIS 140/145 Office Information System. This has recently been extended to include the larger Wang VS System. Our first language pair was German-to-English, which we are demonstrating during this meeting. At the 1983 Hanover Fair, the company announced English-to-German and demonstrated that product, which is currently in field test.From a customer's point of view, this product is a language translator that operates on a word processor -and a standard word processor at that. And the market response has been enthusiastic, as you might expect.Let's examine this product in light of the desirable characteristics previously described...Logos executes its draft translation independently of the translator. In typical operating mode, the translator selects the text (or series of texts) to be translated from the word processing library. The ensuing Logos translation is fully automatic.The program returns each completed translation to the word processing text library for subsequent post-editing by the translator. The Logos translation can be considered as the first draft or as a quick scan for information. However, while Logos is broadly capable, and never forgets a transfer, it is not a skilled, professional human translator. 
Before a translated document is ready for publication and distribution, a competent translator must review it, and edit it using the word processing powers of the Wang system. The human translator is essentially the manager of this process -not a limiting variable.Being resident as an integrated function on this leading word processing system instantly provides Logos with all the capabilities associated with that system. Logos commands operate through standard word processing keyboards. Optical character readers, local network and long line telecommunications capabilities and special printers for output, are all standard attachments to the Logos/Wang systems. The integration of natural language translation with word processing is a powerful and natural combination of functions.To someone unfamiliar with translation, the format of a document may seem to be relatively unimportant when compared with the difficulties associated with producing an acceptable translation. The practical experience of Logos has proven this not to be true. In nearly every case translations need to be format-faithful to the source to be acceptable. Once again, Logos's integration into the word processing system has a significant advantage. The Wang WP system is already successfully interfaced to a wide variety of other manufacturers' systems and can maintain format control acceptable to the Logos translation system. Format integrity is neither simple nor easily achieved when one is transferring text from one system to another. It is further complicated when the systems are of different manufacture and when telecommunications are involved. Logos and Wang have successfully dealt with these issues.Logos is semantically strong; this strength is at the heart of our system, and is its measure.The Logos system can render different contextdependent translations for the same source word, depending on that word's specific use in each sentence, especially if it is a verb (or a word with a verb-derived or verb-related sense). This is one of the most powerful of the many Logos product features, and it is worth briefly looking inside the system -at what we call the Semantic Table. This is an ordered set of rules for resolving verbal constituents to their appropriate nuance, and then for transferring this nuance into an appropriate target language transfer. For example, there are rules in the Semantic Table handling over ninety senses of the verb 'kommen'.But the practical significance of semantic strength is simply its ability to get the right transfers. Because the more often we get the transfers right, the less editing time is needed to get the text right.A Logos translation out of one source language can be a translation into more than one target language. Additional programming is not necessary; all that is necessary is to have a multi-target data file that defines the additional target languages and appropriate dictionaries.Logos is capable of accepting new words into its dictionary as soon as they are identified as new. The process of describing new words to the permanent dictionary has been carefully designed to be simple and accurate. Logos's interactive process of dictionary enhancement is called ALEX™: the Automatic Lexicographer. No understanding of programming is needed to make effective use of ALEX. The user has only to answer a few questions put on the screen about the word he wants to add. The exact sequence of questions depends in part on the user's previous responses. 
ALEX relates the new words to words already in the system via analogy and user query responses. Unique semantic codes are created for the new entries and they are immediately entered, with the word, into the dictionary.The ability of a very large logically complicated system to remain flexible and expandable has increased significantly in recent years. The highly visible improvements in the price/performance of computer hardware have largely overshadowed the tremendous progress made in software architecture and design. Systems conceived and implemented fifteen or even ten years ago bear little resemblance to similar systems being implemented today. Logos makes extensive use of tables, a highly efficient database approach to dictionary management, and an operating system that enables efficient program execution on a standard word processing system. I might add that Logos's development effort utilises a mainframe whose memory has a capacity of 8 megabytes. It was no small accomplishment for the company's programming staff to convert Logos software to run in less than 1 per cent of the space of the originalthat is the 64 kilobytes of the Wang microprocessor. The conversion did not produce any degradation either in quality of translation or in any other feature of the system.It is the stated objective of Logos to translate the full range of informative material that is the appropriate source for MT. The present German-to-English general dictionary contains over 100,000 entries. In addition, there is provision for a large number of subject-matter-coded sub-dictionaries, for company-specific dictionaries, and for private-confidential dictionaries. The Logos approach to semantics will enable the system to continually improve its ability to translate complex non-technical material.Standard, yet customer-specific system The concept of a customised 'just for me' system may have initial appeal to the first-time user, but practical experience would quickly convince otherwise. A custom system requires custom support, and custom support is at best expensive, and at worst not always available when needed, price notwithstanding. On the other hand, a user has a right to expect his system to be responsive to his particular needs and as such act as if it is custom. The question is how to manage to be standard and yet appear to be customerspecific. Logos has done exactly that. The system consists of a core containing the translation code and appropriate interfaces. The Logos dictionary is a separate file to which the customer can add new words as well as new words supplied by Logos. Software releases for dictionary and linguistics improve the system without disturbing a customer's previous additions.Determining the cost-effectiveness of any computer application can be very complicated. But put simply, the translation system must have a cost advantage over the existing system. Computing this advantage is complex and requires the inclusion of all relevant costs. However, in most cases companies only look at the direct cost per line of internal and external translation.But translation can have an impact on a business far beyond the size of its operating budget. Consider that translation may be keeping a new product out of its foreign markets because of backlogged documentation. The inaccurate translation of a patent could cost a company millions; its accurate translation could save millions. 
The benefit of a general-purpose translation capability that is timely, accurate, flexible, and instantly available can be of great value to a multinational organisation. Logos on word processing offers such a capability.The terms 'friendly' and 'easy to use' are overused. I will stand on the widely recognised reputation of the host computers, the Wang OIS or VS, as being easy and efficient to use. Further I can assure you that all Logos screen functions have been carefully designed to be in harmony with the Wang command screens. We believe that Logos is exceptionally user-friendly.In the eleven-point description I have just given, I have several times referred to Logos's user -the translator. The translator who works with the system -initiates and schedules translations; -post-edits machine output; -interacts with the Logos dictionary.Translators schedule translations during the work-day, and Logos does them working day and night if necessary. The computer system can be hard at work translating in the background while other functions such as text entry, post-editing, new word entry or translation management are actively going on at each or any terminal. And a Logos system can have thirty or more workstations. Text to be translated can be entered at any WP station and edited at stations in the translation department.What has really happened is that the translation function and the translator have been brought into the mainstream of the organisation. Translation is no longer isolated. We believe that Logos increases the productivity of translators and thus their value to the company. Further, we believe that there is a large latent market for translation: translations that just aren't done at present because of cost, or translation backlog. How many copies were made when carbon paper had to be used? Xerographic technology dramatically increased duplication activity because it became easy to do. We believe Logos technology will do the same for translation, and position translators in the mainstream of the business.It is impossible to compress literally hundreds of man-years of effort into twenty minutes, and this paper gives only a glimpse of what we believe is the most advanced commercial language translation system in the world. As the new arrival on the machine translation scene we expect to be examined critically, and hopefully with interest.Ralph E. Hawes, Vice-President, Marketing, Logos Corporation, 100 Fifth Avenue, Waltham, MA 02154, USA. Appendix:
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
495
0.00202
null
null
null
null
null
null
null
null
b2bc2f04a407e3e2283f0b4327f289befef8b64a
237295825
null
Management of the machine translation environment: interaction of functions at the Pan American Health Organization
Spanish-English machine translation at the Pan American Health Organization (WHO regional office) has been fully operational since early 1980. The environment supports, at the same time: production, terminology retrieval, dictionary and program maintenance, and advanced development of a new system from English into Spanish. The interaction of these activities strengthens all of them mutually.
{ "name": [ "Vasconcellos, Muriel" ], "affiliation": [ null ] }
null
null
Proceedings of Translating and the Computer 5: Tools for the trade
1983-11-01
3
12
null
At the Pan American Health Organization (PAHO) we feel that a multifaceted working environment has contributed importantly to the progress of our work in machine translation. Our activity combines, at the same time, production for users, terminology work, dictionary development, enhancement of the current translation programme and development of a second system. Each of the components receives input and support from all the others. We are confident that this approach has been a major factor in the viability that we enjoy today. PAHO is the specialised international agency in the Americas that deals with public health, and as such it has a statutory role both within the Inter-American system and as part of the UN family, in which it serves as the regional office of the World Health Organization (WHO). The official languages are Spanish, English, Portuguese and French. The volume of human translation over the past five years has averaged 57 per cent into Spanish, 32 per cent into English, 9.4 per cent into Portuguese, and 1.6 per cent into French. In the mid-1970s the administrators at PAHO decided to look into machine translation as a means of reducing costs. Quantum advances in the speed, storage capacity, and efficiency of digital computers had made it seem reasonable to reconsider the possibility of mobilising them in the service of translation. A mainframe computer, then an IBM 360 with a disk operating system, was already in place at PAHO. Based on the results of a feasibility study, it was decided in 1975 to undertake work on a machine translation system that would run on this installation on a time-sharing basis. From the outset it was recognised that post-editing would be a necessity. This was a trade-off for the fact that the system would have to be able to deal with free syntax, with any vocabulary normally used in the Organization, and, ultimately, with as many different fields and genres of discourse as possible. No consideration was given to a mode of operation that would require pre-editing. The intention was to have a system that would mesh with the routine flow of text within the secretariat. With these criteria in mind, a team of consultants was contracted in 1976 to develop a system specially tailored to PAHO's needs. Of the two priority combinations, English-Spanish and Spanish-English, the latter was chosen as the first area of concentration. This combination requires fewer parsing strategies in order to produce manageable output, and at the time priority had to be given above all to setting up the architecture of the system and its extensive supporting software. The next three years were devoted to mounting this architecture and to writing the basic algorithm for translation from Spanish to English. At the end of that period there were twelve PL/1 programs in place performing a variety of tasks, including dictionary update, retrieval, and maintenance. It was also a dictionary-intensive period. In the beginning, the Georgetown methodology (1) was used for dictionary development: hand-coded entries were tied to glosses derived from twin-text concordances of a 40,000-word corpus of PAHO-specific running text. This approach yielded some 8,000 source entries with target equivalents. In order to test the system, however, it was decided to augment this core with multilingual lists of technical terms that were more superficially coded. By 1979 the combined dictionaries came to a level of some 48,000 entries.
More than half the total corresponded to terms in the health and biomedical fields, the remainder being general vocabulary. Toward the end of this initial period, work with the dictionaries was greatly facilitated by the development of mnemonic, user-friendly software for updating, for side-by-side printing, and for the retrieval of individual records. These were the first collaborative undertakings in which PAHO staff provided feedback and 'wish lists' of features that would be desirable. The translation algorithm by that time could produce primitive output. There were basic routines for disambiguating part-of-speech homographs, which provided for the possibility of a source word being any combination of noun, verb, or adjective. Idioms could be looked up as units as long as they were fixed strings. Noun phrases were recognised and rearranged in target order. Partial groundwork had been laid for prepositional government. A few lexical routines had been written directly into the program. Rudimentary operations could be performed on the verb string in the third person of the present tense, although it was necessary to have all verb inflections in full form in the source dictionary, and subject pronouns absent from the original Spanish text were inserted in specific environments. It was a fully impacted system, and the programs were not yet modular. At the time the only mode of input was punched cards. For this reason more than any other, production had not yet been seriously considered. But the picture was to change dramatically at the end of 1979. In November of that year a full-time computational linguist was assigned to the project's regular staff, and shortly thereafter a telecommunication interface was established between the mainframe computer and the Organization's word processing system (then a Wang WPS 30). Thus the word processor was enabled as a remote job entry terminal for sending batch translation jobs to the computer and receiving them back again. It was no longer necessary to have a text specially keyboarded for machine translation. A conversion program was written which interprets for purposes of MT any text prepared in a normal layout using standard typing conventions. Mainly, it recognises format and distinguishes facultative punctuation (capitalisation, full stops, and hyphens) from forms permanently stored in the dictionary. From the time this program was installed, any Spanish text keyed on the Wang system, regardless of the purpose for which it had originally been entered, was available for machine translation. The word processing interface also gave us a powerful tool at the output end. Thanks to the string manipulation features available on the Wang, post-editing on-screen became an easy task from the mechanical standpoint. It was this combination - the availability of a staff computational linguist in-house and the possibility of sending and receiving text on the word processor - that provided the stimulus for going into production. Regular use of the system gave it identity, and soon it was baptised Spanam - 'Span' for Spanish, and 'am' for the American Region of WHO. For another year the PAHO staff worked side-by-side with one of the consultants. Inspired by the process of ongoing production, PAHO began to specify the improvements in the algorithm that would have the greatest impact on translation.
In response to our recommendations, the following improvements were made: verb synthesis was expanded to include all tenses; verb string manipulation was improved; features were added which permitted the disambiguation of pronouns; idioms were made inflectable; prepositional government was extended in both directions and to various parts of speech; homograph routines were expanded; the noun-phrase patterns were revamped; and the program was modified so as to take orthographic accents (which had not been included up to then) into account. Also during this period a start was made on reorganising the program into a modular structure. This would make it possible to carry on with production while improvements were being made in specific areas of the system. Gradually the computational linguist became familiar with the system software. By mid-1980 she had completed the first major improvement done independently by PAHO in-house staff, namely the morphological lookup for verbs. Without this development, large-scale production would never have been feasible. Before, when verbs had to be entered in their full form, it often happened that the main verb of a sentence was not found in the dictionary, with the result that the analysis routines were disrupted. After the installation of verb morphology, the incidence of not-found words dropped to less than one per cent, and these in general are not crucial to the structure of the sentence. They are apt to be proper names, abbreviations, Latin terms, and nonce-formations, and in certain environments the system assumes that they are nouns. More recently, several features have been introduced for gap analysis: hyphenated words can now be broken down and dealt with in terms of their components, and the program utilises information from certain prefixes and suffixes. Other improvements added in 1980 included additional work on verb synthesis (in particular on Spanish verb forms occurring in association with the particle se, for which a number of treatments are now available) and extension of the maximum length of a source dictionary entry from five words to twenty-five. On a more general level, further streamlining was done to the program, particularly with a view to making the modules watertight. The structure, as it now stands, is shown in Figure 1. Starting in mid-1980, production began to steadily gain momentum. People on the staff would hear about Spanam either by word of mouth or from our programme of demonstrations (always on random text), which continues to this day. As our facilities improved, we would establish contact with offices in PAHO where we felt that a particular application might be especially appropriate. For the most part, however, it is the users who have come to us. In our first major project, the Organization's biennial budget document, we were able to demonstrate a saving in the cost of its translation of 61 per cent and a reduction in staff-days of 45 per cent. The success of this project attracted other users and launched us on our way. By early 1981 another option became available which potentially would also facilitate production. PAHO's optical character reader, a Compuscan Alphaword II which until that time had been reserved for the transmission of telexes, was interfaced with the word processor. Thus our full configuration includes the OCR (Figure 2). Also, the Wang was upgraded to an OIS/140. In the last two and a half years we have processed texts in a wide range of fields and for various purposes.
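The morphological lookup for verbs described above, together with the later note that 94 per cent of Spanam entries are bases or stems ('split forms'), points to the familiar stem-plus-ending technique. The Python sketch below illustrates that technique in miniature; the stem dictionary, the list of endings, and the function name are invented for exposition, and the actual Spanam routines were written in PL/1 against a far richer dictionary record.

```python
# Minimal sketch of stem-plus-ending ("split form") lookup for Spanish verbs.
# Stems, endings, and the analysis record are illustrative only, not Spanam data.

STEM_DICTIONARY = {
    "traduc": {"gloss": "translate", "class": "-ir verb"},
    "estudi": {"gloss": "study", "class": "-ar verb"},
}

# A few regular endings, longest first so the match strips as much as it can.
ENDINGS = sorted(
    ["aremos", "aron", "ando", "imos", "iendo", "amos", "aba",
     "an", "ar", "ir", "e", "o", "a"],
    key=len,
    reverse=True,
)

def analyse(word_form: str):
    """Try to split an inflected form into a known stem plus an ending."""
    for ending in ENDINGS:
        if word_form.endswith(ending):
            stem = word_form[: -len(ending)]
            if stem in STEM_DICTIONARY:
                entry = STEM_DICTIONARY[stem]
                return {"stem": stem, "ending": ending, "gloss": entry["gloss"]}
    return None  # a genuinely not-found word (proper name, abbreviation, etc.)

if __name__ == "__main__":
    print(analyse("traducimos"))   # stem 'traduc' + ending 'imos' -> 'translate'
    print(analyse("estudiaron"))   # stem 'estudi' + ending 'aron' -> 'study'
    print(analyse("Larrea"))       # None: handled as a not-found word
```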
Our actual daily average per post-editor, with other duties included, comes to about 6,500 words, and we have been able to post-edit as much as 11,000 words in a day. New software put into use at the end of September 1983 eliminates several housekeeping tasks which previously represented a time overhead of about 20 per cent. This means that the post-editor is able to devote full time to the text, and, with other recent improvements, it should be possible to bring our daily average closer to a consistent level of 8,000 to 10,000 words. We are constantly developing new techniques and devices for speeding up the process of post-editing. At the cerebral level, we have amassed a bag of tricks for making fewer and more strategic changes in the text. Research time has been cut down by the introduction of reliability marks on all preferred terminology that is found by the dictionary. And at the mechanical end, on the word processor we have designed a series of string functions specially for dealing with English MT output. We try to develop anything that might reduce the work of post-editing, at whatever level the job can be done most efficiently. This focus, we feel, is much more cost-effective than an exclusive preoccupation with errors that may be generated by the algorithm. The finished product is delivered by informing the user that the translation is available on the word processing system. The header of each page bears the words POST-EDITED MACHINE TRANSLATION, and at the end of the document there is a message that reads: THE FOREGOING TEXT IS A POST-EDITED MACHINE TRANSLATION. Usually our office assumes responsibility for the post-editing. Starting with the budget document, which was a large project, it became evident that we would need someone on the staff with experience in translation who would post-edit and manage the flow of production. The position was created and it has been filled by a trained translator. During slack periods, this person also works on the dictionaries. Sometimes we have given raw, or nearly raw, output to editors or technical writers who were interested only in having a rough draft to work from - and even, on occasion, to other professional translators.* We do find that it is far more efficient to post-edit on-screen, and for this reason we prefer to deal with users who will be doing the same. The entry of hand-written corrections from hard copy constitutes an extra step which we would prefer to avoid. Often the only hard copy that we see is the side-by-side output (Spanish on the left, English on the right), printed either on the mainframe computer or the Wang (Figure 3), which we use for guidance purposes during post-editing. As we work, we make note on this copy of any changes that may be needed in the dictionary - changes in the glosses, candidates for micro-glossary treatment, idioms to be introduced, etc. By marking the changes to be made at the time of post-editing, we are able to accelerate the dictionary work. This is an important point at which the functions of our team intersect. The combined experience of development and production enabled us to build the Spanam source dictionary to a level of more than 56,000 entries as of September 1983. Of this total, 94 per cent are bases or stems ('split forms') and 6 per cent are full forms. Although the incidence of not-found words in the output is minimal, we continue constantly [...] (* In December 1984 the Director of PAHO announced a merger of the human and machine translation services.)
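The projected rise from about 6,500 words a day toward 8,000-10,000 follows directly from removing the roughly 20 per cent housekeeping overhead. The figures are those reported above; the back-of-the-envelope check below is ours, not the author's.

```python
# Back-of-the-envelope check of the post-editing figures reported above:
# removing a ~20% housekeeping overhead from a 6,500-word daily average
# already implies roughly 8,100 words per day.
current_daily_average = 6_500     # words post-edited per day, as reported
housekeeping_overhead = 0.20      # share of the working day previously lost

projected = current_daily_average / (1 - housekeeping_overhead)
print(round(projected))           # -> 8125, consistent with the 8,000-10,000 target
```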
Micro-glossaries have been added in order to deal with terms that have different meanings in different disciplines. The micro-glossaries also make it possible for us to attend to the wishes of users who provide us with feedback on their preferred terminology when this differs from glosses that we need to have in the main dictionary. Another feature is the possibility of specifying that preferred and reliable terminology be so marked in the output. These features are an outgrowth of the fact that our office is also responsible for the co-ordination of terminology at PAHO. Within the year we expect to have installed on the Wang OIS/140 a database of biomedical terminology, Whoterm, which has been developed for the Organization's internal use by WHO/Geneva.(2) To the extent feasible, entries from Whoterm will be incorporated into the MT dictionaries. Thus, when a reliability mark appears in the output, the post-editor will be able to check its definition while remaining at the same workstation. Entries from the MT dictionaries can be consulted on the word processor as well as at the computer terminal. In effect, the retrieval is a selective print using the same software that prints the hard copies which we use for purposes of consultation and development. Updating of the dictionaries is also done both at the word processor and the computer terminal, as well as with typewriter and paper, for submission via OCR. The update program is user-friendly in more than one sense: descriptors are mnemonic and, in addition, many defaults are built in. As a result, updating goes quite fast. The update run is always submitted as a batch job, regardless of the mode of input. Again, as with our other activities, work on the dictionary involves the rest of the system as well. On the one hand, experience from production suggests what is needed, and on the other, problems that appear at first to be at the level of the dictionary may turn out to require adjustments in the algorithm. Or a series of examples, after being worked with in context, may inspire a long-sought solution or a new approach. Our growth with Spanam prepared us for the challenge of building a system from English into Spanish. The importance of a combined working mode that would permit the upgrading of Spanam while at the same time starting on the new system, Engspan, was emphasised in evaluations of the system in early 1981. Two separate evaluations were done by Professors Ross Macdonald and Michael Zarechnak of Georgetown University. Both Dr Macdonald and Dr Zarechnak pointed out that insights gained in each endeavour would contribute to the other. And this in fact has proven to be the case. The chief priorities isolated by the consultants entailed the need to deepen the dictionary coding and to expand the parsing routines. Their recommendations were essential for a system from English into Spanish, but they also applied to improvement of the existing system. Spanam was to continue operating, but features of the new system, as they became available, could be incorporated. Thus the dictionary record, which is common to the two systems, has been expanded to permit extensive possibilities for syntactic and semantic coding that had not existed before. Where there had originally been 82 fields there are now 211. It is an evolutionary process. The new fields are added by turning bytes from the original record into bits. Accordingly, there is much space left to draw on, and further reorganisation is anticipated.
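Expanding the record from 82 to 211 fields by 'turning bytes from the original record into bits' is the familiar technique of repurposing a one-byte field as up to eight independent one-bit codes. A minimal sketch of that idea follows; the flag names and bit assignments are invented for illustration, and the real Spanam/Engspan record is a PL/1 structure, not Python.

```python
# Sketch of turning one byte of a fixed-length dictionary record into eight
# one-bit codes. Flag names and positions are invented for illustration only.

FLAG_BITS = {            # assumed assignment of bit positions to new codes
    "animate": 0,
    "human": 1,
    "mass_noun": 2,
    "takes_infinitive": 3,
}

def set_flag(byte_value: int, flag: str) -> int:
    """Switch one of the packed one-bit codes on."""
    return byte_value | (1 << FLAG_BITS[flag])

def has_flag(byte_value: int, flag: str) -> bool:
    """Test one of the packed one-bit codes."""
    return bool(byte_value & (1 << FLAG_BITS[flag]))

if __name__ == "__main__":
    semantic_byte = 0                              # the original single-purpose byte
    semantic_byte = set_flag(semantic_byte, "animate")
    semantic_byte = set_flag(semantic_byte, "human")
    print(has_flag(semantic_byte, "human"))        # True
    print(has_flag(semantic_byte, "mass_noun"))    # False
```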
The new codes (Table 1) are available to both systems. Also, a greatly improved method for the handling of discontinuous idioms will soon be available. And most important, the parsing of Spanam will benefit from the expanded strategies being developed for Engspan through the use of augmented transition networks based on a slot-and-filler-type grammar. (3) Spanam/Engspan operate today using a total of sixteen PL/1 programs - six major ones (Table 2) and ten external procedures. They all run on the mainframe, now an IBM 4341. Of thirteen programs delivered to PAHO in 1980, eight are still in use, although they have evolved considerably with the changing environment. The development of Engspan actually began in late 1981, and as of September 1983 the English source dictionary had approximately 40,000 entries, most of them already tied to appropriate equivalents in the Spanish target. The algorithm has a lemmatisation module, a lookup for single words and idioms, routines for resolving a limited number of homograph types, a module for recognising and synthesising noun phrases, and a complete procedure for the synthesis of inflected Spanish verb forms in all tenses and moods for the first and third persons. There is a working corpus of 50,000 running words. Test translations already give promising results. Engspan recently received a mandate in the form of a grant from the US Agency for International Development (US AID) for the two-year period beginning August 1983. A second full-time computational linguist has been contracted. Work is focusing on selected aspects of noun phrase analysis, verb selectional restrictions, and clause-level parsing, which are expected to produce the highest yield in terms of impact on translation. The dictionary work - mainly in-depth coding of existing entries - is to accompany the new developments as required. Toward the end of the grant period, an evaluative study will address the possibility of Engspan being adapted to a mini- or a microcomputer. At all points the work of Spanam and Engspan is closely interrelated, and work goes on simultaneously in every area. The job that lies ahead, I believe, can best be tackled in an environment, such as the one described here, which brings all the phases together under a single roof - post-editing by professional translators, terminology work, dictionary-building, and system refinement. A high degree of interaction with the output is an important factor in the further enhancement of operational systems. This is particularly so in the new era that we are entering on in machine translation. As Professor Yorick Wilks has said, 'there is now no place left for the endlessly diverting question of whether MT is possible or not; it is clearly so'.(4) The current challenge is to whittle away at the remaining inefficiencies in the day-to-day working environment, attacking them at whatever level is most effective. The need now is for a sustained problem-solving effort, always creative and taking advantage of technological innovation as it becomes available and linguistic insights as they become known. [...] whose sudden death on 16 June 1983 was a very sad blow. Within the structure at PAHO, we are indebted to our supervisor, Luis Larrea Alba, Jr, Chief of General Services, for his support since the beginning, and to Dr Charles L. Williams, Jr, Deputy Director of the Organization until his retirement in 1979, without whose vision machine translation would never have come to PAHO.
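The augmented transition networks mentioned above are essentially state-transition graphs whose arcs test word classes and, as a side effect, fill the slots of a slot-and-filler structure. The Python sketch below illustrates that mechanism on a toy English noun phrase; the network, lexicon, and slot names are invented for exposition and are not PAHO's grammar or code.

```python
# Rough illustration of an augmented transition network (ATN) that recognises a
# simple English noun phrase and fills slots as it moves between states.
# The network, lexicon, and slot names are invented for illustration only.

LEXICON = {
    "the": "DET", "a": "DET",
    "new": "ADJ", "biennial": "ADJ",
    "budget": "N", "document": "N", "translation": "N",
}

# State -> list of (word class, next state, slot to fill).
NETWORK = {
    "NP0": [("DET", "NP1", "determiner"), ("ADJ", "NP1", "modifiers"), ("N", "NP2", "head")],
    "NP1": [("ADJ", "NP1", "modifiers"), ("N", "NP2", "head")],
    "NP2": [("N", "NP2", "head")],   # a later noun takes over as head ("budget document")
}
FINAL_STATES = {"NP2"}

def parse_np(words):
    """Traverse the network, filling slots; return them if a final state is reached."""
    state, slots = "NP0", {"determiner": None, "modifiers": [], "head": None}
    for word in words:
        word_class = LEXICON.get(word.lower())
        for arc_class, next_state, slot in NETWORK[state]:
            if word_class == arc_class:
                if slot == "modifiers":
                    slots["modifiers"].append(word)
                elif slot == "head" and slots["head"] is not None:
                    slots["modifiers"].append(slots["head"])  # demote the earlier noun
                    slots["head"] = word
                else:
                    slots[slot] = word
                state = next_state
                break
        else:
            return None  # no arc accepts this word
    return slots if state in FINAL_STATES else None

if __name__ == "__main__":
    print(parse_np("the new budget document".split()))
    # -> {'determiner': 'the', 'modifiers': ['new', 'budget'], 'head': 'document'}
```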
null
null
null
null
null
null
null
null
{ "paperhash": [ "bates|language_as_a_cognitive_process", "sereda|practical_experience_of_machine_translation" ], "title": [ "Language as a Cognitive Process", "Practical experience of machine translation" ], "abstract": [ "Books reviewed in the AJCL will be those of interest to computat ional linguists; books in closely related disciplines may also be considered. The purpose of a book review is to inform readers about the content of the book and to present opinions on the choice of material, manner of presentat ion, and suitability for various readers and purposes. There is no limit to the length of reviews. The appropriate length is determined by its content. If you wish to review a specific book, please contact me before doing so to check that it is not already under review by someone else. If you want to be on a list of potential reviewers, please send me your name and mailing address together with a list of keywords summarizing your areas of interest. You can also suggest books to be reviewed without volunteering to be the reviewer.", "Post-editing is one of the most significant factors in the operation of a computer translation system. The economic validity of computer translation stands or falls on the efficiency and success of the post-editing process. The factors affecting the post-editing functions include the linguistic performance of the system, the quality of the source text, availability of terminology, capabilities of the personnel and mechanical aspect of the translation process." ], "authors": [ { "name": [ "Lyn Bates", "T. Winograd" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "S. P. Sereda" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null ], "s2_corpus_id": [ "2209224", "61069362" ], "intents": [ [], [] ], "isInfluential": [ false, false ] }
Problem: The paper addresses the development and operationalization of machine translation systems at the Pan American Health Organization (PAHO) for Spanish-English translation. Solution: The hypothesis posits that a multifaceted working environment, combining production, terminology work, dictionary development, and system enhancement, contributes significantly to the progress and viability of machine translation systems at PAHO.
495
0.024242
null
null
null
null
null
null
null
null
db2d0d657c392bf1806740b93a4fff4970c821a9
59916008
null
Machine translation, machine-aided translation, and machine-impeded translation
The paper describes the general philosophy behind the range of translation aids developed by Automated Language Processing Systems, which include interactive machine translation.
{ "name": [ "Tenney, Merle D." ], "affiliation": [ null ] }
null
null
Proceedings of Translating and the Computer 5: Tools for the trade
1983-11-01
9
1
null
In the more than thirty years which have transpired since Warren Weaver circulated his now famous memo 'Translation', a great many computer systems have been proposed for dealing with the problem of translating between human languages. Many of these systems evidence a real understanding of the ways in which the computer can best be brought to bear in support of the translation process. They are an aid to translation. Other systems are insensitive to the abilities and shortcomings of both man and machine. In spite of the best intentions of their designers, they are often an impediment to translation. The aptness of a translation system is a relative thing - relative to the text to be translated, the needs of the intended audience, and the requirements of the organisation providing the translation, among other considerations. It follows that a general translation system must incorporate a variety of translation aids to match the multiplicity of translation requirements. There is a broad continuum of ways in which man and machine can share the translation responsibility. It ranges from Bar-Hillel's FAHQT (Fully Automatic, High Quality Translation) at one extreme to human translation with no machine aids at the other. [...] It is generally acknowledged that FAHQT does not exist today. As a result, many people have felt at a loss to describe existing automatic translation systems, all of which require some human intervention to produce high-quality translation. Terms such as 'semi-automatic', 'machine-aided', 'automatised', and 'traditional MT' have been proposed by various observers to refer to this class of translation systems. The only term which has rivalled the standard appellation, 'machine translation', with any degree of success has been 'machine-aided translation'. Unfortunately, this term is taken by many to refer to MAHT and HAMT noted above. The initialisms HTLGI, MAHT, HAMT, and FAHQT (or FAMT, for 'Fully Automatic Machine Translation') have a fairly standard interpretation, but they are hardly transparent to the newcomer to the field of translation technology. In an effort to cut through some of the terminological confusion (and at the risk of compounding it further), may I propose the following straightforward descriptions of the four major points in the continuum: 'writing aids', 'translation aids', 'interactive translation', and 'automatic translation'. A brief description of each will serve to clarify its meaning. 'Writing aids' refers to a set of monolingual programs and reference files made available to a writer to help him compose or edit a document. The most basic writing aid is a good word processor. Other writing aids range from spelling and punctuation checkers to style and readability analysers. Inasmuch as translation is a special case of writing, translators can profit from having access to tools which help them write better. 'Translation aids' refers to a variety of bilingual and multilingual aids which stop short of proposing whole sentence translations. They include such diverse aids as term bank systems and systems for spotting existing translations. Translation aids have proven effective in increasing translator productivity, quality, and satisfaction. 'Interactive translation' refers to any system in which the computer produces a translation of complete sentences under the interactive guidance of a human operator.
It differs from translation aids in that with translation aids the human assumes the primary role in producing a translation, whereas with interactive translation the computer takes the lead. Interactive translation differs from automatic translation in its provision for consulting with a human operator during the translation process. 'Automatic translation' refers to any system in which the computer produces a translation of complete sentences based entirely on its own resources (algorithms, grammars, and dictionaries). The fact that texts which are translated with an automatic translation system may be subject to pre-editing or post-editing does not make the translation system itself any less automatic. The notion that the range of useful machine aids to translation encompasses more than automatic translation is not new. It was one of the major conclusions of the ALPAC report in 1966. In 1976 Bernard Vauquois made the following recommendation: 'Consider now the feasibility of A.T. systems which merge human translators and the computer in a hybrid process. We can imagine several different strategies',(1) whereupon he gave a brief description of a pre-edit/post-edit system and an interactive system, calling the latter 'the ideal way for the future'. In 1979 an international committee of experts in machine translation gathered in Belgium under the auspices of the International Federation for Documentation (FID) and the International Association of Applied Linguistics (AILA). Donald Walker and Hans Karlgren reported this conclusion reached by the committee: 'Encouraging developments are expected in the area of refined combinations of machine and human cooperation, rather than attempts at complete automatization. Mere post-editing of machine output does not seem to be a realistic way of producing adequate translation'. (2) In 1980 Martin Kay published his marvellous essay, 'The proper place of men and machines in language translation'. In it he stated: The need for translated texts will not be filled by a program of research that devotes all of its resources to a distant ideal, and linguists and computer experts will be denied the proper rewards of their labors if they must promise to reach the ideal by some specific time. A healthy climate for FAHQT will be one in which a variety of different though related goals are being pursued with equal vigor for the intellectual and practical benefits that they may bring. (3) In 1981 at a workshop on 'Applied computational linguistics in perspective', a panel on machine translation, chaired by Martin Kay, based its recommendations on this observation: The translation problem is real and will in fact rapidly reach crisis proportions unless some action is taken... The only hope for a thoroughgoing solution seems to lie with technology. But this is not to say that there is only one solution, namely machine translation, in the classical sense of a fully automatic procedure that carries a text from one language to another with human intervention only in the final revision. There is, in fact, a continuum of ways in which technology could be brought to bear, with fully automatic translation at one extreme, and word processing equipment and dictating machines at the other. (4) In 1982, at COLING 82 held in Prague, Alan Melby raised the issue once more: It is now quite respectable in computational linguistics to develop a computer system which is a TOOL used by a human expert to access information helpful in arriving at a diagnosis or other conclusion.
Perhaps, then, it is time to entertain the possibility that it is also respectable to develop a machine translation system which includes sophisticated linguistic processing yet is designed to be used as a tool for the human translator. (5) It is 1983 now, and it seems that the point has still not been made. With the exception of the very fine work on term banks in progress at a number of locations around the world, no-one seems very interested in focusing on the human-oriented translation systems. To the best of my knowledge, the work carried on at the Translation Sciences Institute of Brigham Young University in the 1970s has been the only major research effort to concern itself with interactive translation. Its offshoot, ALPS, is apparently the only commercial enterprise pursuing this avenue of development at present. We at ALPS view this situation with mixed emotions: it is nice to stand apart from the pack, especially as we feel that we are on solid ground, but we are continually amazed that no-one has attempted to challenge our position. Assuming that each of the classes of machine aids has its place, it is important to know what considerations recommend one aid over another for a particular application. There are several factors worthy of consideration. Probably the most obvious consideration is the nature of the text to be translated. Juan Sager reports this lesson learned from the early history of machine translation: 'Documents requiring translation are so diverse in nature that no one system is ever likely to be suitable for all manner of texts; this opens the way for the concurrent development of several systems with different types of objective'.(6) Friedrich Krollmann has given this useful explanation of the amenability of a text to machine translation: One can also categorise texts according to whether the difficulties involved are difficulties of formulation - the extreme case being that of esoteric or highly emotional texts - or difficulties presented by large numbers of specialised terms... That wide sector of translation work in which the translator's freedom of formulation is severely limited covers not only the translation of catalogues but also the translation of technical and scientific texts. The further we move in the direction of specialised vocabulary texts, the more help we can expect from the computer in the actual translation processes, for the time being at any rate; conversely, the practical applicability of the computer declines, the more formulation problems a text poses.(7) The answers to these and a hundred other questions have profound implications for the selection of a translation aid. The translation of newspaper articles for information-gathering purposes is well suited to an automatic system. The translation of a major policy speech to be read to a foreign parliament is better suited to a more human-oriented process. Seven years ago, David Hays, in surveying the field of machine translation, was moved to comment that 'almost everyone hates translators. They arouse our xenophobia by bringing the enemy into our camp. To give them help in their task, or credit for doing it, is loathsome'.(8) I am not sure that we have progressed so very far in the interim. One can still perceive a 'father knows best' attitude on the part of some developers of machine translation. We should actively strive to educate and encourage users of our systems, but never ignore them.
Boitet, Chatelin, and Daun Fraga concur that the human and social aspects should not be neglected. To force a rigid system on revisors and translators is a guarantee of failure. It must be realised that AT can only be introduced step by step into some preexisting organizational structure. The translators and revisors of the EC did not only reject Systran because of its poor quality but also because they felt themselves becoming 'slaves of the machine' and condemned to a repetitive and frustrating kind of work. (9) In this connection, it might be noted that Kay's proposal for a 'translator's amanuensis' and Melby's description of a new interactive translation system both address the challenge of providing a range of translation aids to a competent human translator, who never loses control of the situation. As Kay puts it, 'The system proposed here will accumulate only experience of what was agreed upon between both human and mechanical members of the team, the mechanical always deferring to the human'. (3) What is important to consider here is that different systems are well adapted to different users for a number of good and bad reasons. But a well-designed, flexible, user-friendly system will, by its nature, be well adapted to most users. There are a couple of other relevant criteria for the matching of translation aids to tasks which have to do with basic system capabilities. Some aids, interactive and automatic translation, for example, require the source language document to be in machine-readable form. Others (word processing and online dictionary consultation, for instance) work very well in conjunction with hard-copy input. The latter aids would be indicated if the translation requirements did not justify re-keying the contents of a hard-copy source document. Another system consideration is language or language pair availability. A translation system can only be considered for translating documents in the languages it supports. While this is obvious, it has some ramifications for translation system development which may not be so obvious. Consider, for example, the case of a system which attempts to address the translation needs of the United Nations. With six official languages, the UN must translate between thirty (ordered) language pairs. However, it is not the case that every language pair has an equal translation requirement. Chinese to Spanish translations are far less common than English to French translations. Therefore, it is hard to justify the expenditure of similar amounts of time and money in the development of translation aids for these language pairs. This is a general pattern for virtually every type of organisation. A recent survey of translation requirements in twelve industrial nations (with eight major languages and fifty-six language pairs internally) showed that 70 per cent of their total translation volume, including translation to other languages, was generated in twelve language pairs. Six language pairs accounted for 50 per cent of the translation demand, and two language pairs accounted for 20 per cent. (10) The conclusion that can be drawn from all this is that translation systems for a handful of language pairs address the majority of the existing translation demand. It would seem reasonable, then, to address the remainder of the demand with translation aids which are more limited in scope and in development cost. Even if it is not clear where each of the machine aids to translation is best applied, it should be obvious that each has its place.
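The pair counts quoted above follow from the fact that n languages yield n x (n - 1) ordered (directed) translation pairs. The quick check below uses the figures given in the text; the calculation itself is ours, not the author's.

```python
# Quick check of the ordered language-pair counts quoted above:
# n languages give n * (n - 1) directed translation pairs.
def ordered_pairs(n_languages: int) -> int:
    return n_languages * (n_languages - 1)

print(ordered_pairs(6))   # UN: six official languages -> 30 directed pairs
print(ordered_pairs(8))   # survey: eight major languages -> 56 directed pairs
```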
No single system is best suited for all applications. One size, alas, does not fit all.Why, then, do some people insist that automatic translation (and here you may substitute interactive translation, translation aids, or writing aids) is never appropriate? Why do others go on as though it were the only possible choice? It is instructive to ask, what is the motivation for enlisting the help of the computer with translation in the first place? Is the interest primarily practical or is it purely academic?Some people seem to feel that anything less ambitious than fully automatic machine translation is not worth pursuing, that resorting to a more synergistic use of man and machine contributions is a cop-out or is cheating somehow. This comment by Margaret Masterman, made in a slightly different context, is worth remembering: 'The object of having a machine to produce translation, after all, is not (as with chess) to take part in international M.T. competitions, but to produce usable translations'.(11) Nor is the object to take a happy, productive translator away from his regular assignments, stick him in front of a terminal, and ask him to help make the computer look good. Martin Kay gives an example of a technology misdeveloped and misapplied:There was a long period -for all I know, it is not yet over -in which the following comedy was acted out nightly in the bowels of an American government office with the aim of rendering foreign texts into English. Passages of innocent prose on which it was desired to effect this delicate and complex operation were subjected to a process of vivisection at the hands of an uncomprehending electronic monster that transformed them into stammering streams of verbal wreckage. These were then placed into only slightly more gentle hands for repair. But the damage had been done. Simple tools that would have done so much to make the repair work easier and more effective were not to be had, presumably because of the voracious appetite of the monster, which left no resources for anything else.In fact, such remedies as could be brought to the tortured remains of these texts were administered with colored pencils on paper and the final copy was produced by the action of human fingers on the keys of a typewriter. In short, one step was singled out of a fairly long and complex process at which to perpetrate automation. The step chosen was by far the least well understood and quite obviously the least apt for this kind of treatment.Government and bureaucracy may be imbued with a sad fatalism that forces it to look to the future as destined to repeat the follies of the past, but we can surely take a moment to wonder at the follies of the past and nostalgically to muse about what a kinder and more rational world would be like. 3Whether the world of the future will be kinder or any more rational is uncertain. What is certain, though, is that it will be a world of our own making and, therefore, a world of our own deserving. The field of machine translation is at a crossroad. We can develop systems which attempt too much or systems which attempt too little. We can develop systems which capitalise on the special strengths of man and machine components or systems which ignore them. We can develop machine-aided translation or machine-impeded translation or some combination of the two.The choice is ours. 
As for ALPS, we are committed to the goal of developing flexible systems which permit men and machines to interact productively using a set of tools appropriate to the requirements of a wide range of translation tasks.
null
null
null
null
Main paper: In the more than thirty years which have transpired since Warren Weaver circulated his now famous memo 'Translation', a great many computer systems have been proposed for dealing with the problem of translating between human languages. Many of these systems evidence a real understanding of the ways in which the computer can best be brought to bear in support of the translation process. They are an aid to translation. Other systems are insensitive to the abilities and shortcomings of both man and machine. In spite of the best intentions of their designers, they are often an impediment to translation. The aptness of a translation system is a relative thing - relative to the text to be translated, the needs of the intended audience, and the requirements of the organisation providing the translation, among other considerations. It follows that a general translation system must incorporate a variety of translation aids to match the multiplicity of translation requirements. There is a broad continuum of ways in which man and machine can share the translation responsibility. It ranges from Bar-Hillel's FAHQT (Fully Automatic, High Quality Translation) at one extreme, through HAMT (human-aided machine translation) and MAHT (machine-aided human translation), to HTLGI at the other. It is generally acknowledged that FAHQT does not exist today. As a result, many people have felt at a loss to describe existing automatic translation systems, all of which require some human intervention to produce high-quality translation. Terms such as 'semi-automatic', 'machine-aided', 'automatised', and 'traditional MT' have been proposed by various observers to refer to this class of translation systems. The only term which has rivalled the standard appellation, 'machine translation', with any degree of success has been 'machine-aided translation'. Unfortunately, this term is taken by many to refer to MAHT and HAMT noted above. The initialisms HTLGI, MAHT, HAMT, and FAHQT (or FAMT, for 'Fully Automatic Machine Translation') have a fairly standard interpretation, but they are hardly transparent to the newcomer to the field of translation technology. In an effort to cut through some of the terminological confusion (and at the risk of compounding it further), may I propose the following straightforward descriptions of the four major points in the continuum: 'writing aids', 'translation aids', 'interactive translation', and 'automatic translation'. A brief description of each will serve to clarify its meaning. 'Writing aids' refers to a set of monolingual programs and reference files made available to a writer to help him compose or edit a document. The most basic writing aid is a good word processor. Other writing aids range from spelling and punctuation checkers to style and readability analysers. Inasmuch as translation is a special case of writing, translators can profit from having access to tools which help them write better. 'Translation aids' refers to a variety of bilingual and multilingual aids which stop short of proposing whole sentence translations. They include such diverse aids as term bank systems and systems for spotting existing translations. Translation aids have proven effective in increasing translator productivity, quality, and satisfaction. 'Interactive translation' refers to any system in which the computer produces a translation of complete sentences under the interactive guidance of a human operator.
It differs from translation aids in that with translation aids the human assumes the primary role in producing a translation, whereas with interactive translation the computer takes the lead.Interactive translation differs from automatic translation in its provision for consulting with a human operator during the translation process.'Automatic translation' refers to any system in which the computer produces a translation of complete sentences based entirely on its own resources (algorithms, grammars, and dictionaries). The fact that texts which are translated with an automatic translation system may be subject to pre-editing or post-editing does not make the translation system itself any less automatic.The notion that the range of useful machine aids to translation encompasses more than automatic translation is not new. It was one of the major conclusions of the ALPAC report in 1966.In 1976 Bernard Vauquois made the following recommendation: 'Consider now the feasibility of A.T. systems which merge human translators and the computer in a hybrid process. We can imagine several different strategies',(1) whereupon he gave a brief description of a pre-edit/post-edit system and an interactive system, calling the latter 'the ideal way for the future'.In 1979 an international committee of experts in machine translation gathered in Belgium under the auspices of the International Federation for Documentation (FID) and the International Association of Applied Linguistics (AILA). Donald Walker and Hans Karlgren reported this conclusion reached by the committee: 'Encouraging developments are expected in the area of refined combinations of machine and human cooperation, rather than attempts at complete automatization. Mere post-editing of machine output does not seem to be a realistic way of producing adequate translation'. (2) In 1980 Martin Kay published his marvellous essay, 'The proper place of men and machines in language translation'. In it he stated:The need for translated texts will not be filled by a program of research that devotes all of its resources to a distant ideal, and linguists and computer experts will be denied the proper rewards of their labors if they must promise to reach the ideal by some specific time. A healthy climate for FAHQT will be one in which a variety of different though related goals are being pursued with equal vigor for the intellectual and practical benefits that they may bring. (3)In 1981 at a workshop on 'Applied computational linguistics in perspective', a panel on machine translation, chaired by Martin Kay, based its recommendations on this observation:The translation problem is real and will in fact rapidly reach crisis proportions unless some action is taken... The only hope for a thoroughgoing solution seems to lie with technology. But this is not to say that there is only one solution, namely machine translation, in the classical sense of a fully automatic procedure that carries a text from one language to another with human intervention only in the final revision. There is, in fact, a continuum of ways in which technology could be brought to bear, with fully automatic translation at one extreme, and word processing equipment and dictating machines at the other. 4In 1982, at COLING 82 held in Prague, Alan Melby raised the issue once more:It is now quite respectable in computational linguistics to develop a computer system which is a TOOL used by a human expert to access information helpful in arriving at a diagnosis or other conclusion. 
Perhaps, then, it is time to entertain the possibility that it is also respectable to develop a machine translation system which includes sophisticated linguistic processing yet is designed to be used as a tool for the human translator. 5It is 1983 now, and it seems that the point has still not been made. With the exception of the very fine work on term banks in progress at a number of locations around the world, no-one seems very interested in focusing on the human-oriented translation systems.To the best of my knowledge, the work carried on at the Translation Sciences Institute of Brigham Young University in the 1970s has been the only major research effort to concern itself with interactive translation. Its offshoot, ALPS, is apparently the only commercial enterprise pursuing this avenue of development at present. We at ALPS view this situation with mixed emotions: it is nice to stand apart from the pack, especially as we feel that we are on solid ground, but we are continually amazed that no-one has attempted to challenge our position.Assuming that each of the classes of machine aids has its place, it is important to know what considerations recommend one aid over another for a particular application. There are several factors worthy of consideration.Probably the most obvious consideration is the nature of the text to be translated. Juan Sager reports this lesson learned from the early history of machine translation: 'Documents requiring translation are so diverse in nature that no one system is ever likely to be suitable for all manner of texts; this opens the way for the concurrent development of several systems with different types of objective'.(6) Friedrich Krollmann has given this useful explanation of the amenability of a text to machine translation:One can also categorise texts according to whether the difficulties involved are difficulties of formulation -the extreme case being that of esoteric or highly emotional texts -or difficulties presented by large numbers of specialised terms... That wide sector of translation work in which the translator's freedom of formulation is severely limited covers not only the translation of catalogues but also the translation of technical and scientific texts. The further we move in the direction of specialised vocabulary texts, the more help we can expect from the computer in the actual translation processes, for the time being at any rate; conversely, the practical applicability of the computer declines, the more formulation problems a text poses.(7) The answers to these and a hundred other questions have profound implications for the selection of a translation aid. The translation of newspaper articles for informationgathering purposes is well suited to an automatic system. The translation of a major policy speech to be read to a foreign parliament is better suited to a more human-oriented process.Seven years ago, David Hays, in surveying the field of machine translation, was moved to comment that 'almost everyone hates translators. They arouse our xenophobia by bringing the enemy into our camp. To give them help in their task, or credit for doing it, is loathsome'.(8) I am not sure that we have progressed so very far in the interim. One can still perceive a 'father knows best' attitude on the part of some developers of machine translation. We should actively strive to educate and encourage users of our systems, but never ignore them. 
Boitet, Chatelin, and Daun Fraga concur that the human and social aspects should not be neglected.To force a rigid system on revisors and translators is a guarantee of failure. It must be realised that AT can only be introduced step by step into some preexisting organizational structure. The translators and revisors of the EC did not only reject Systran because of its poor quality but also because they felt themselves becoming 'slaves of the machine' and condemned to a repetitive and frustrating kind of work. 9In this conjunction, it might be noted that Kay's proposal for a 'translator's amanuensis' and Melby's description of a new interactive translation system both address the challenge of providing a range of translation aids to a competent human translator, who never loses control of the situation. As Kay puts it, 'The system proposed here will accumulate only experience of what was agreed upon between both human and mechanical members of the team, the mechanical always deferring to the human'. 3What is important to consider here is that different systems are well adapted to different users for a number of good and bad reasons. But a well-designed, flexible, user-friendly system will, by its nature, be well adapted to most users.There are a couple of other relevant criteria for the matching of translation aids to tasks which have to do with basic system capabilities. Some aids, interactive and automatic translation, for example, require the source language document to be in machine-readable form. Others (word processing and online dictionary consultation, for instance) work very well in conjunction with hard-copy input. The latter aids would be indicated if the translation requirements did not justify re-keying the contents of a hard-copy source document.Another system consideration is language or language pair availability. A translation system can only be considered for translating documents in the languages it supports. While this is obvious, it has some ramifications for translation system development which may not be so obvious.Consider, for example, the case of a system which attempts to address the translation needs of the United Nations. With six official languages, the UN must translate between thirty (ordered) language pairs. However, it is not the case that every language pair has an equal translation requirement. Chinese to Spanish translations are far less common than English to French translations. Therefore, it is hard to justify the expenditure of similar amounts of time and money in the development of translation aids for these language pairs. This is a general pattern for virtually every type of organisation. A recent survey of translation requirements in twelve industrial nations (with eight major languages and fifty-six language pairs internally) showed that 70 per cent of their total translation volume, including translation to other languages, was generated in twelve language pairs. Six language pairs accounted for 50 per cent of the translation demand, and two language pairs accounted for 20 per cent. 10The conclusion that can be drawn from all this is that translation systems for a handful of language pairs address the majority of the existing translation demand. It would seem reasonable, then, to address the remainder of the demand with translation aids which are more limited in scope and in development cost.Even if it is not clear where each of the machine aids to translation is best applied, it should be obvious that each has its place. 
No single system is best suited for all applications. One size, alas, does not fit all.Why, then, do some people insist that automatic translation (and here you may substitute interactive translation, translation aids, or writing aids) is never appropriate? Why do others go on as though it were the only possible choice? It is instructive to ask, what is the motivation for enlisting the help of the computer with translation in the first place? Is the interest primarily practical or is it purely academic?Some people seem to feel that anything less ambitious than fully automatic machine translation is not worth pursuing, that resorting to a more synergistic use of man and machine contributions is a cop-out or is cheating somehow. This comment by Margaret Masterman, made in a slightly different context, is worth remembering: 'The object of having a machine to produce translation, after all, is not (as with chess) to take part in international M.T. competitions, but to produce usable translations'.(11) Nor is the object to take a happy, productive translator away from his regular assignments, stick him in front of a terminal, and ask him to help make the computer look good. Martin Kay gives an example of a technology misdeveloped and misapplied:There was a long period -for all I know, it is not yet over -in which the following comedy was acted out nightly in the bowels of an American government office with the aim of rendering foreign texts into English. Passages of innocent prose on which it was desired to effect this delicate and complex operation were subjected to a process of vivisection at the hands of an uncomprehending electronic monster that transformed them into stammering streams of verbal wreckage. These were then placed into only slightly more gentle hands for repair. But the damage had been done. Simple tools that would have done so much to make the repair work easier and more effective were not to be had, presumably because of the voracious appetite of the monster, which left no resources for anything else.In fact, such remedies as could be brought to the tortured remains of these texts were administered with colored pencils on paper and the final copy was produced by the action of human fingers on the keys of a typewriter. In short, one step was singled out of a fairly long and complex process at which to perpetrate automation. The step chosen was by far the least well understood and quite obviously the least apt for this kind of treatment.Government and bureaucracy may be imbued with a sad fatalism that forces it to look to the future as destined to repeat the follies of the past, but we can surely take a moment to wonder at the follies of the past and nostalgically to muse about what a kinder and more rational world would be like. 3Whether the world of the future will be kinder or any more rational is uncertain. What is certain, though, is that it will be a world of our own making and, therefore, a world of our own deserving. The field of machine translation is at a crossroad. We can develop systems which attempt too much or systems which attempt too little. We can develop systems which capitalise on the special strengths of man and machine components or systems which ignore them. We can develop machine-aided translation or machine-impeded translation or some combination of the two.The choice is ours. 
As for ALPS, we are committed to the goal of developing flexible systems which permit men and machines to interact productively using a set of tools appropriate to the requirements of a wide range of translation tasks. Appendix:
null
null
null
null
{ "paperhash": [ "melby|multi-level_translation_aids_in_a_distributed_system", "karlgren|computer_aids_in_translation", "boitet|present_and_future_paradigms_in_the_automatized_translation_of_natural_languages.", "masterman|the_essential_skills_to_be_acquired_for_machine_translation", "krollmann|data_processing_at_the_translator's_service" ], "title": [ "Multi-Level Translation Aids in a Distributed System", "COMPUTER AIDS IN TRANSLATION", "Present and Future Paradigms in the Automatized Translation of Natural Languages.", "The essential skills to be acquired for machine translation", "Data processing at the translator's service" ], "abstract": [ "At COLING80, we reported on an Interactive Translation System called ITS. We will discuss three problems in the design of the first version of ITS: (1) human factors, (2) the \"all or nothing\" syndrome, and (3) traditional centralized processing. We will also discuss a new version of ITS, which is now being programmed. This new version will hopefully overcome these problems by placing the translator in control, providing multiple levels of aid, and distributing the processing.", "The presentation is based on the results of an international meeting for establishing the state of the art and for making recommendations for continued research and development in the field. The meeting was arranged by Kval on behalf of the AIL A Commission for Computational Linguistics in conjunction with the Committee for Linguistics in Documentation within the International Federation for Documentation (FID/LD). See statement on pp 99–101. The conclusions below, though in tune with that statement, have been formulated by the author, who therefore alone carries the responsibility for them.", "Useful automatized translation must be considered in a problem-solving setting, composed of a linguistic environment and a computer environment. We examine the facets of the problem which we believe to be essential, and try to give some paradigms along each of them. Those facets are the linguistic strategy, the programming tools, the treatment of semantics, the computer environment and the types of implementation.", "It is exceedingly timely that there should be, at this seminar, a high-level scrutiny of the relations between \"translator\" and \"machine\": because, as we all know, there is also a world-wide expansion of the need to translate. Owing to improvements in telecommunications, the earth is becoming a global village; but in every house of the village, the inhabitants still speak each their different language, and this fact affects both individuals and corporations.", "Our theme this afternoon is one which reflects the influence of modern technology — or rather of various modern technologies — on an activity which is of interest to all of us — translation. The obvious question arising is: in which areas and to what extent can data processing be employed to make the work of the translating and allied professions easier? This question, however, requires immediate qualification. When I refer to electronic data processing I mean, of course, not the constantly expanding fields of application in which this new branch of technology is establishing itself but he limited field of (computational linguistics which, though a sub-branch of non-numeric data processing, is not necessarily to be taken as identical to the latter. 
The bipolar relationship between computational linguistics and translation science involves two components, \"data processing for translation\" and \"translation science (including other branches of applied linguistics, such as terminology) for data processing\". The short time available limits me, for all practical purposes, to saying a few words on the former of these two components. Before the Congress commenced, I was informed that many of those present today come from the ranks of literary translators, a fact which does not make my task of elucidating on the above considerations any the easier. Data processing is a specialised techno-scientific field based on strict logicalmathematical procedures and on methods of representation which can be structured and explicitly formulated. This can by no means be said to apply to literary translation, and we can be thankful for this small mercy, for to apply the processes of data processing to literary translation would be to pave the way for the triumphant revival of prescriptive perfectionism and the classical formalism associated therewith. However, I am sure that not everybody present in this hall is a literary translator; indeed, I feel safe in assuming that a World Congress is per se representative of all aspects of translation. Of course, the extent to which what I have to say on the use of computers will be of interest over such a wide range of translating activity will vary considerably according to the type of text to be translated. And when categorising types of text, the categories must correspond to activities ranging from, on the one hand, interlingual refabrication as, for example, in the case of lyric poetry to the direct transfer of sparepart and stock catalogues on the other, whereby the transition from the one extreme to the other is a continuous process through infinite intermediate shades. For translation is all of this. One can also categorise texts according to whether the difficulties involved are difficulties of formulation — the extreme case being that of esoteric or highly emotional texts — or difficulties presented by large numbers of specialised terms. Of course, there will be borderline cases of texts to which both these criteria can be applied. To deal with such cases the translator will require not only a well-developed skill for formulation but also an extensive specialised vocabulary. But that wide sector of translation work in which the translator's freedom of formulation is severely limited covers not only the translation of catalogues but also the translation of technical and scientific texts. The further we move in the direction of specialised vocabulary texts, the more help we can expect from the computer in the actual translation processes, for the time being at any rate; conversely, the practical applicability of the computer declines the more formulation problems a text poses. This applies, as I have already indicated, only to the practical process of translation as such. When we consider the theoretical investigation of the translation process from a linguistic point of view we are faced with an entirely different situation. However, practical limitations to the scope of my talk" ], "authors": [ { "name": [ "A. Melby" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "H. Karlgren" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "C. Boitet", "Philippe Chatelin", "P. D. 
Fraga" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "M. Masterman" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Friedrich Krollmann" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null, null, null ], "s2_corpus_id": [ "8310536", "171025270", "1807458", "236999924", "44845168" ], "intents": [ [], [], [], [], [] ], "isInfluential": [ false, false, false, false, false ] }
Problem: No single translation system suits every text, audience, and organisation, and fully automatic translation alone cannot meet the diversity of real translation requirements. Solution: The paper argues that a range of machine aids - writing aids, translation aids, interactive translation, and automatic translation - should be matched to the task at hand, so that human translators remain in control and can produce high-quality translations across different text types and languages.
495
0.00202
null
null
null
null
null
null
null
null
0317203d2890ed3469cb7a3e940aac4b30523071
36679484
null
Recent developments in practical machine translation
A brief survey of progress on operational systems with particular reference to practical use and developments. Aspects covered are word processing, text handling, types of user, comparative analysis of systems, cost and performance evaluations, and experience gained at various levels.
{ "name": [ "Pigott, Ian M." ], "affiliation": [ null ] }
null
null
Proceedings of Translating and the Computer 5: Tools for the trade
1983-11-01
0
0
null
Those of you who attended the 1981 conference Practical Experience of Machine Translation* may have been somewhat surprised to see how much could be said at the time by speakers who in many cases had only very limited experience of practical machine translation (MT). Many must have had their doubts about the usefulness of the systems presented, while others must have been more than just a little sceptical about the potential of MT as an aid to translators. (* LAWSON, V. (ed.). Practical experience of machine translation. Proceedings of the third 'Translating and the Computer' conference, London, 5-6 November 1981. Amsterdam: North-Holland, 1982.) Today we are to take a new look at the state of the art. As can be seen from the programme, a great deal has happened in the last two years. New systems have been developed, older systems have been improved, more users have emerged and a substantial amount of additional experience has been gained. Two years ago computers were still regarded by the general public as rather frightening pieces of space-age equipment to be used by highly specialised experts for performing highly technical tasks. Today, with the explosion of personal computers, word processors, electronic games and fully automated banking systems, people have begun to recognise that machines can in fact perform a whole range of useful tasks which until recently required considerable human effort. Even translators - whose working methods are still generally very similar to those used over the centuries - are now starting to realise that computers can and will play an ever increasing role in their day-to-day work. The growing interest in MT can be seen from a few rough statistics. For example, it would seem that over the past twelve months some 400,000 pages of translation have been run in the production environment - an astoundingly high figure if we consider that up to 600 translators working full-time would have been required to achieve the same level of output using conventional methods. A total of eleven language pairs are now being offered by the major manufacturers and several more are under development. The most notable additions are perhaps the more exotic language combinations such as Japanese-English or English-Arabic. We have already seen how text processing and telecommunications networks can be used to streamline some of the more traditional aspects of translation processing. Acceptance of such aids by translators is to be welcomed, particularly as the processing of machine translation and its growing success are intimately linked to the translator's willingness to make use of word processing facilities. In this context, the editing of raw machine translations on-screen is not to be forgotten. Perhaps the most important recent development on the practical MT front has indeed been the sophistication of peripherals used to streamline connections between translation software and office systems. The availability of word processors has led to the development of a variety of text-handling programs for preserving page presentation, enhancing the quality of printouts and ensuring reliable cataloguing and archiving of source and target texts. Simplified menu systems have been developed for submitting documents and facilitating telecommunications for MT processing, often leading to dramatic improvements in rapidity and ease of operation.
As a result, several systems including ALPS, Logos, Smart and Weidner are now available in bureau service, and others are likely to follow. In addition, some manufacturers have made considerable progress in miniaturising hardware requirements. Most impressive here is the availability of the Weidner system on the IBM and ICL personal computers, but the Logos Corporation should also be congratulated on producing a software package available as an option on Wang office systems. Such developments would have appeared impossible a few years ago, when MT could only run successfully on large mainframe computers.Such developments should be monitored carefully, as they will not only bring MT facilities to smaller firms and translation agencies but may well provide the means for individual translators to tune into MT through personal computer networks. In addition they will lead to increasing competition between MT suppliers, which will no doubt result in cheaper, more efficient service for the user.While the quality of raw MT has steadily improved as systems and their dictionaries have expanded to meet the needs of an ever increasing number of users, there have been few really striking developments in the linguistic approaches to practical MT. By and large the older systems have continued to have the greatest success, but all manufacturers will of course say that their system is the best.Spanam, based on the Russian-English Georgetown system of the sixties, has been successfully used for translating large volumes of text from Spanish into English and is now being expanded to cover English-Spanish.Logos, originally developed for English-Vietnamese, is now producing encouraging results for German-English and is available as an option on Wang office systems. The integrated Weidner system has now been installed at a number of locations and appears to be serving as a useful aid for an increasing number of language pairs. ALPS, available in five language combinations from English, has achieved considerable success in at least one large translation agency and is now widely available in bureau service through Control Data.Even the rather elementary Smart system, which unfortunately is not represented here today, has progressed enormously from the point of view of actual usership over the last two years. This system, which aims solely at clear information transfer, has attracted dozens of new users in North America for its four language pairs (English into French, Spanish, German and Italian), including the Canadian Department of Employment and Immigration and the Caterpillar Corporation. Volumes of up to 900,000 words per month are now being handled with considerable user satisfaction. I am sure we shall hear a lot more about the success of this approach in the months to come.While the TAUM Meteo system continues to be used by the Canadian government for around-the-clock translation of weather bulletins from English into French, the more ambitious Aviation project has now been completely abandoned owing to the discontinuation of funding. It is difficult to judge whether the system's disappointing performance was a result of the linguistic approach or of the basic software design. One important factor was certainly the cost of dictionary expansion, which was said to amount to as much as $40 per entry compared to less than $5 for most other systems.On the other hand, two European systems, the French GETA and the German Susy, seem finally to be coming into some kind of practical use. 
I am told that the GETA system has successfully translated 30,000 words of Russian into French, while the Susy system is undergoing pilot tests at five organisations including the German Patent Office and the European Space Agency. Unfortunately, I have been unable to obtain any user reactions to these systems.Again in regard to the recent upgrading of second generation systems to the production environment, Siemens has decided to finance extensive German-English development of the METAL system put together by the University of Texas Linguistics Research Center in Austin. I understand that Siemens has already achieved considerable progress and that they feel METAL will provide more acceptable results than Logos, which the company had once been involved in developing. It will be interesting to see how the system performs in a fully operational environment.As regards Systran, considerable progress has been made on the Japanese-English and English-Japanese systems, which are now available for use from the Systran Japan Corporation. Two new pairs, English-German and French-German, have been developed by the European Commission and will soon be used in production work. Systran Institut, Germany, has been devoting considerable efforts to the English-Arabic and German-English systems, which may well attract customers in the coming months.Apart from the European Commission's own application, about which we will be hearing more today, Systran has also seen several new users since the 1981 conference, some of them with quite formidable volumes of material for MT processing. In Europe, the Commission's French-English and English-French Systran systems are now being used by Karlsruhe Nuclear Research Centre (Kernforschungszentrum Karlsruhe) for nuclear documentation, and in France by SNIAS (Societé Nationale de l'Industrie Aérospatiale) for the aviation sector and by CNRS (National Scientific Research Centre) for the translation of a wide variety of documentary databases. The US Air Force has installed Systran French-English for information scanning and Wang Laboratories is using English-French and English-German for translating maintenance manuals. Finally, the Xerox Corporation, which has been using Systran exclusively for translating its own maintenance manuals into a variety of target languages, has now opened up a bureau service for the North American market.A number of problems still face the majority of new users. Not all systems are available for general use, and those that are often require a considerable amount of additional development before they can be brought into effective operation for a new application. By and large manufacturers still tend to oversell their systems, stating for example that MT will quadruple the average translator's normal production, make for consistent terminology and provide an efficient means of producing camera-ready copy.These claims are to some extent justified. 
Many users of MT would be prepared to admit to such levels of success but only for certain language pairs and only after having undertaken extensive development work -particularly with regard to terminology -and after learning by trial and error which documents are suitable for MT and which translators or technical editors are ready and able to correct raw MT output.Another interesting phenomenon is the degree to which present and potential users have begun to compare the results of machine translation by running the same texts through different MT systems or through different versions of the same system. This is now possible since several systems offer the same language pairs. English-French, for instance, is available (in alphabetical order) on ALPS, Smart, Systran, TAUM, TITUS and Weidner, while German-English is available on Logos, METAL, TITUS, and on two versions of Systran.Unfortunately, the results of such comparisons are generally based on a number of subjective factors, the most usual being previous experience in the use or development of a system. Translators who have used batch systems such as Logos, Smart or Systran will find it difficult to adapt to the more interactive approach required for ALPS or Weidner. Similarly, if a user has, over the months or years, taken pains to eliminate errors of syntax or terminology from a given system, he will naturally not like to see the same errors reappearing in output from another system. What he might not realise, of course, is that users of the other system would react in the same way to output from his system. What all this shows is that there is a definite tendency for system developers to base enhancements on the needs of existing users, with little or no concern for as yet unidentified new users. A system, or more correctly, a system's language pair, which has been used primarily for translating computer manuals -and may have indeed been developed for this application -will seldom perform well for the translation of financial or administrative texts which have quite different syntax, terminology and format.Is there then any objective means of judging the cost, quality, suitability and general performance of a system for a new user? Factors which obviously deserve consideration here are:-the availability of systems for the language pairs in question; -the cost of installing the system or using it on a rental or bureau basis; -the present performance of the system for a given subject field; -the ease and cost of further developing the system to cater for specific needs; and -the willingness or ability of staff to maintain and use the system in practice.Last but not least, it is important to establish what level of final quality is required, as MT can often produce good rough translations with only a limited amount of post-editing. Perhaps the best way to obtain this type of information is by consulting existing users in the same or similar fields, in order to assess how much effort they needed to invest before the system could be used successfully in production. Not to be underestimated here are the ease, extent and cost of additional dictionary and general development work required to adapt a system to a level of quality sufficiently high for post-editors to be able to work more quickly than by traditional methods. Another important consideration is the amount of special training or experience required for handling the human side of the process, particularly in regard to text entry and post-editing. 
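One way to make such a comparison concrete - purely as an illustration, since the paper prescribes no scoring method and the factor names, weights, and ratings below are invented - is to score each candidate system against the factors listed above:

# Illustrative only: factors, weights, and ratings are hypothetical, not from the paper.
factors = {
    "language_pair_available": 3,
    "installation_or_bureau_cost": 2,
    "subject_field_performance": 3,
    "ease_of_further_development": 2,
    "staff_willingness_to_use": 2,
}

def score(system_ratings, factors):
    # system_ratings maps each factor name to a rating from 0 (poor) to 5 (excellent);
    # the weighted sum gives a rough basis for comparing candidate systems.
    return sum(weight * system_ratings.get(name, 0) for name, weight in factors.items())

candidate = {
    "language_pair_available": 5,
    "installation_or_bureau_cost": 2,
    "subject_field_performance": 3,
    "ease_of_further_development": 4,
    "staff_willingness_to_use": 3,
}
print(score(candidate, factors))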
But all in all, given the fact that computer costs are rapidly decreasing while human costs are steadily rising, the single most important factor to be considered is the extent to which users -particularly translators -have been prepared to adapt to the new approach for routine production work. If translators are prepared to admit that post-editing has resulted in time savings, then the overall benefits will be even higher, as MT can be linked to office systems and communications networks offering sharp reductions in document handling, typing and publication times.I am confident that many of these issues will be raised in the papers to be presented today and in the ensuing discussions. After all, what really counts in machine translation is not so much the level of quality achievable by adopting the latest results of linguistics or informatics research, but the extent of the assistance MT can give to practising translators.The past two years have certainly seen a great deal of practical progress in the use of machine translation, and I am sure that today's exchange of ideas will be of benefit to existing and potential users alike. While a measure of healthy competition between manufacturers and even between users is to be expected, at this stage in the game we all have much to learn from the views of anyone who has had hands-on experience of MT and is able to report on its progress and shortcomings.With the benefit of today's discussions, I very much hope we shall all be able to meet once again in two or three years' time and report still more success in the use of practical MT and its contribution to multilingual communications at all levels.Ian M. Pigott, Systran Project Leader, Commission of the European Communities, JMO B4/24A, BP 1907, Luxembourg GD.
null
null
null
null
Main paper: introduction: Those of you who attended the 1981 conference Practical Experience of Machine Translation* may have been somewhat surprised to see how much could be said at the time by speakers who in many cases had only very limited experience of practical machine translation (MT). Many must have had their doubts about the usefulness of the systems presented, while others must have been more than just a little sceptical about the potential of MT as an aid to translators.Today we are to take a new look at the state of the art. As can be seen from the programme, a great deal has happened in the last two years. New systems have been developed, older systems have been improved, more users have emerged and a substantial amount of additional experience has been gained.Two years ago computers were still regarded by the * LAWSON, V. (ed.). Practical experience of machine translation. Proceedings of the third 'Translating and the Computer' conference, London, 5-6 November 1981 . Amsterdam: North-Holland, 1982 Tools for the Trade, V. Lawson (ed.) . © Aslib and Ian M. Pigott. general public as rather frightening pieces of space-age equipment to be used by highly specialised experts for performing highly technical tasks. Today, with the explosion of personal computers, word processors, electronic games and fully automated banking systems, people have begun to recognise that machines can in fact perform a whole range of useful tasks which until recently required considerable human effort. Even translators -whose working methods are still generally very similar to those used over the centuries -are now starting to realise that computers can and will play an ever increasing role in their day-to-day work.The growing interest in MT can be seen from a few rough statistics. For example, it would seem that over the past twelve months some 400,000 pages of translation have been run in the production environment -an astoundingly high figure if we consider that up to 600 translators working full-time would have been required to achieve the same level of output using conventional methods.A total of eleven language pairs are now being offered by the major manufacturers and several more are under development. The most notable additions are perhaps the more exotic language combinations such as Japanese-English or English-Arabic.We have already seen how text processing and telecommunications networks can be used to streamline some of the more traditional aspects of translation processing. Acceptance of such aids by translators is to be welcomed, particularly as the processing of machine translation and its growing success are intimately linked to the translator's willingness to make use of word processing facilities. In this context, the editing of raw machine translations on-screen is not to be forgotten.Perhaps the most important recent development on the practical MT front has indeed been the sophistication of peripherals used to streamline connections between translation software and office systems.The availability of word processors has led to the development of a variety of text-handling programs for preserving page presentation, enhancing the quality of printouts and ensuring reliable cataloguing and archiving of source and target texts.Simplified menu systems have been developed for submitting documents and facilitating telecommunications for MT processing, often leading to dramatic improvements in rapidity and ease of operation. 
As a result, several systems including ALPS, Logos, Smart and Weidner are now available in bureau service, and others are likely to follow. In addition, some manufacturers have made considerable progress in miniaturising hardware requirements. Most impressive here is the availability of the Weidner system on the IBM and ICL personal computers, but the Logos Corporation should also be congratulated on producing a software package available as an option on Wang office systems. Such developments would have appeared impossible a few years ago, when MT could only run successfully on large mainframe computers.Such developments should be monitored carefully, as they will not only bring MT facilities to smaller firms and translation agencies but may well provide the means for individual translators to tune into MT through personal computer networks. In addition they will lead to increasing competition between MT suppliers, which will no doubt result in cheaper, more efficient service for the user.While the quality of raw MT has steadily improved as systems and their dictionaries have expanded to meet the needs of an ever increasing number of users, there have been few really striking developments in the linguistic approaches to practical MT. By and large the older systems have continued to have the greatest success, but all manufacturers will of course say that their system is the best.Spanam, based on the Russian-English Georgetown system of the sixties, has been successfully used for translating large volumes of text from Spanish into English and is now being expanded to cover English-Spanish.Logos, originally developed for English-Vietnamese, is now producing encouraging results for German-English and is available as an option on Wang office systems. The integrated Weidner system has now been installed at a number of locations and appears to be serving as a useful aid for an increasing number of language pairs. ALPS, available in five language combinations from English, has achieved considerable success in at least one large translation agency and is now widely available in bureau service through Control Data.Even the rather elementary Smart system, which unfortunately is not represented here today, has progressed enormously from the point of view of actual usership over the last two years. This system, which aims solely at clear information transfer, has attracted dozens of new users in North America for its four language pairs (English into French, Spanish, German and Italian), including the Canadian Department of Employment and Immigration and the Caterpillar Corporation. Volumes of up to 900,000 words per month are now being handled with considerable user satisfaction. I am sure we shall hear a lot more about the success of this approach in the months to come.While the TAUM Meteo system continues to be used by the Canadian government for around-the-clock translation of weather bulletins from English into French, the more ambitious Aviation project has now been completely abandoned owing to the discontinuation of funding. It is difficult to judge whether the system's disappointing performance was a result of the linguistic approach or of the basic software design. One important factor was certainly the cost of dictionary expansion, which was said to amount to as much as $40 per entry compared to less than $5 for most other systems.On the other hand, two European systems, the French GETA and the German Susy, seem finally to be coming into some kind of practical use. 
I am told that the GETA system has successfully translated 30,000 words of Russian into French, while the Susy system is undergoing pilot tests at five organisations including the German Patent Office and the European Space Agency. Unfortunately, I have been unable to obtain any user reactions to these systems.Again in regard to the recent upgrading of second generation systems to the production environment, Siemens has decided to finance extensive German-English development of the METAL system put together by the University of Texas Linguistics Research Center in Austin. I understand that Siemens has already achieved considerable progress and that they feel METAL will provide more acceptable results than Logos, which the company had once been involved in developing. It will be interesting to see how the system performs in a fully operational environment.As regards Systran, considerable progress has been made on the Japanese-English and English-Japanese systems, which are now available for use from the Systran Japan Corporation. Two new pairs, English-German and French-German, have been developed by the European Commission and will soon be used in production work. Systran Institut, Germany, has been devoting considerable efforts to the English-Arabic and German-English systems, which may well attract customers in the coming months.Apart from the European Commission's own application, about which we will be hearing more today, Systran has also seen several new users since the 1981 conference, some of them with quite formidable volumes of material for MT processing. In Europe, the Commission's French-English and English-French Systran systems are now being used by Karlsruhe Nuclear Research Centre (Kernforschungszentrum Karlsruhe) for nuclear documentation, and in France by SNIAS (Societé Nationale de l'Industrie Aérospatiale) for the aviation sector and by CNRS (National Scientific Research Centre) for the translation of a wide variety of documentary databases. The US Air Force has installed Systran French-English for information scanning and Wang Laboratories is using English-French and English-German for translating maintenance manuals. Finally, the Xerox Corporation, which has been using Systran exclusively for translating its own maintenance manuals into a variety of target languages, has now opened up a bureau service for the North American market.A number of problems still face the majority of new users. Not all systems are available for general use, and those that are often require a considerable amount of additional development before they can be brought into effective operation for a new application. By and large manufacturers still tend to oversell their systems, stating for example that MT will quadruple the average translator's normal production, make for consistent terminology and provide an efficient means of producing camera-ready copy.These claims are to some extent justified. 
Many users of MT would be prepared to admit to such levels of success but only for certain language pairs and only after having undertaken extensive development work -particularly with regard to terminology -and after learning by trial and error which documents are suitable for MT and which translators or technical editors are ready and able to correct raw MT output.Another interesting phenomenon is the degree to which present and potential users have begun to compare the results of machine translation by running the same texts through different MT systems or through different versions of the same system. This is now possible since several systems offer the same language pairs. English-French, for instance, is available (in alphabetical order) on ALPS, Smart, Systran, TAUM, TITUS and Weidner, while German-English is available on Logos, METAL, TITUS, and on two versions of Systran.Unfortunately, the results of such comparisons are generally based on a number of subjective factors, the most usual being previous experience in the use or development of a system. Translators who have used batch systems such as Logos, Smart or Systran will find it difficult to adapt to the more interactive approach required for ALPS or Weidner. Similarly, if a user has, over the months or years, taken pains to eliminate errors of syntax or terminology from a given system, he will naturally not like to see the same errors reappearing in output from another system. What he might not realise, of course, is that users of the other system would react in the same way to output from his system. What all this shows is that there is a definite tendency for system developers to base enhancements on the needs of existing users, with little or no concern for as yet unidentified new users. A system, or more correctly, a system's language pair, which has been used primarily for translating computer manuals -and may have indeed been developed for this application -will seldom perform well for the translation of financial or administrative texts which have quite different syntax, terminology and format.Is there then any objective means of judging the cost, quality, suitability and general performance of a system for a new user? Factors which obviously deserve consideration here are:-the availability of systems for the language pairs in question; -the cost of installing the system or using it on a rental or bureau basis; -the present performance of the system for a given subject field; -the ease and cost of further developing the system to cater for specific needs; and -the willingness or ability of staff to maintain and use the system in practice.Last but not least, it is important to establish what level of final quality is required, as MT can often produce good rough translations with only a limited amount of post-editing. Perhaps the best way to obtain this type of information is by consulting existing users in the same or similar fields, in order to assess how much effort they needed to invest before the system could be used successfully in production. Not to be underestimated here are the ease, extent and cost of additional dictionary and general development work required to adapt a system to a level of quality sufficiently high for post-editors to be able to work more quickly than by traditional methods. Another important consideration is the amount of special training or experience required for handling the human side of the process, particularly in regard to text entry and post-editing. 
But all in all, given the fact that computer costs are rapidly decreasing while human costs are steadily rising, the single most important factor to be considered is the extent to which users -particularly translators -have been prepared to adapt to the new approach for routine production work. If translators are prepared to admit that post-editing has resulted in time savings, then the overall benefits will be even higher, as MT can be linked to office systems and communications networks offering sharp reductions in document handling, typing and publication times.

I am confident that many of these issues will be raised in the papers to be presented today and in the ensuing discussions. After all, what really counts in machine translation is not so much the level of quality achievable by adopting the latest results of linguistics or informatics research, but the extent of the assistance MT can give to practising translators.

The past two years have certainly seen a great deal of practical progress in the use of machine translation, and I am sure that today's exchange of ideas will be of benefit to existing and potential users alike. While a measure of healthy competition between manufacturers and even between users is to be expected, at this stage in the game we all have much to learn from the views of anyone who has had hands-on experience of MT and is able to report on its progress and shortcomings.

With the benefit of today's discussions, I very much hope we shall all be able to meet once again in two or three years' time and report still more success in the use of practical MT and its contribution to multilingual communications at all levels.

Ian M. Pigott, Systran Project Leader, Commission of the European Communities, JMO B4/24A, BP 1907, Luxembourg GD.

Appendix:
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
495
0
null
null
null
null
null
null
null
null
63156145104d64c2f89a031bd342196534424881
237295821
null
A glossary on your word processor
A text-related glossary can be prepared with other data before a draft translation is produced. It can be updated or expanded while the draft is being entered, during proofreading of the translation or later. The word processor can also be used to compile other special-purpose glossaries.
{ "name": [ "Samuelsson-Brown, Geoffrey" ], "affiliation": [ null ] }
null
null
Proceedings of Translating and the Computer 5: Tools for the trade
1983-11-01
0
0
null
Perhaps a brief introduction about the way I work would be in order. I have two DFE System 50 word processors, each with twin 8-inch disk drives and printers. One of the machines is operated by a full-time secretary and the other is used for editing and administration. The majority of the work is dictated, though for convenience I type some small jobs on one of the machines. The disks have capacities of 286 and 580 kilobytes according to whether they are single or double density. They permit the storage of between 200 and 500 pages of text, depending on how the text is formatted.

Why use a word processor to produce your own glossary if there are perfectly good glossaries available in print? Inevitably the glossaries which are commercially available contain a vast number of terms you already know -but, to your annoyance, they lack precisely the ones you can't find. Some, as you know, are not worth the money you spend on them -a situation you cannot always avoid when buying books from abroad without being able to see them first. This is where the compilation of your own glossaries and the word processor come into use. Another aspect is that particular companies have their own ideas on terminology. What is acceptable to one company may not be acceptable to another even though both words may mean the same thing. Don't be frightened of asking the client for any information. Unless he is dealing with the market concerned for the first time he is more than likely to have some background material which, though it might not say much to him, may provide some vital clues to you.

When you have all the material ready for a job then you can start with your initial glossary compilation. The purpose of having a word processor is to cut down on work. If you work directly onto the word processor instead of dictating, the amount of typing time you can cut down on makes the machine even more useful. Long words or terms that occur time and time again throughout the work can be replaced temporarily with abbreviations. If you dictate, all you need to say is 'Insert "so and so"', or whatever the identification of the repeat item is. In a long report on radiation protection, say, terms such as 'The National Swedish Institute of Radiation Protection's Code of Statutes' or 'The Radiation Protection Act (1958:110)' may occur dozens of times. (Since I work primarily with Scandinavian languages, the bias will naturally be towards such examples.) You will not want to dictate these lengthy expressions each time they occur. Neither will the person who is doing the typing be too happy to keep repeating them. Such items can be made into a temporary glossary in advance of producing the translation and can be accessed as and when required.

Using the DFE you can compose a glossary of terms in a number of ways depending on how you want to use and store the work. The machine has several methods of storage, both volatile and non-volatile. First there are the terms that you want to store permanently as a glossary. I find it practical to have separate disks for main subject groups such as heat treatment, automobile engineering, corrosion, offshore engineering, software, tools, accounting terms or annual reports, and also for particular customers.

One way to store items is as modules. A module is an independent unit of text which can be integrated with other text or merged with other modules to form larger units of text. Take a car maintenance manual as an example.
A 24,000 mile service will include all the 12,000 mile service items which, in turn, will include the 6,000 mile service items. Making up schedules from a basic set of modules is much better than having to type the same instructions over and over again. The size of the module is limited only by practical considerations and the only essential limitation is the length of the name that identifies the module. This is restricted to 16 alphanumeric characters but is usually quite sufficient for normal identification. Modules are stored on a text disk, or working disk as it is sometimes called.If you find you want to store format commands or short pieces of text then you can use what is called a multicode. A multicode is identified by a single character, either alpha or numeric, and its storage capacity is limited to 240 characters. An advantage of a multicode is that the machine will allow it to be stored on the software disk or program disk. This is useful since you can then have standard format commands on the software disk for use in conjunction with any text disk you care to insert in the machine. Multicodes are volatile, so they need to be stored before the system is either reset or switched off. However, for most purposes, the use of modules is most practical.As I said, the item is identified by up to 16 alphanumeric characters which are enclosed by a format symbol at either end. If you enter too many characters, the system will tell you so when you ask it to format the work. It will display 'TOO MANY CHARACTERS' and the cursor will go to the 17th character. It will also remain silent unless you ignore the message for more than 10 seconds; it then gets annoyed and starts to bleep. This way you don't even have to be able to count to use a word processor. A single key stroke eliminates the excess characters and you can then continue.You can use the machine's print facilities to highlight the keyword or term by emboldening, underlining or tabulating. The way in which you produce the final copy is entirely a matter of personal choice.When you have entered all the information and terms you feel you need, the first draft of the glossary is ready for use. There may be terms which you are not quite sure about. Leave these for the time being. They may become apparent as you work through the text, or you may wish to consult a specialist. He may not come up with an answer immediately, and you may need to revise your glossary at a later date.You don't have to worry about putting the words or terms in alphabetical order in the glossary since the DFE does this automatically. It also tells you if you try to enter the same thing twice, by displaying 'NAME ALREADY PRESENT'. In fact the machine is very logical and stops you doing lots of silly things.You can then print your glossary out for later use when you are translating the main body of the text. Having a selective glossary which may be contained on one or two pages is far more convenient than having to leaf through pages in a dictionary. How many pages in a dictionary do you actually have to leaf through to find the word or expression you want? Or how many dictionaries, for that matter?You can print out the list of module names (which are sorted alphanumerically) or the contents of the modules (in the same order). For the sake of convenience you can have the glossary on one disk and the translation on another. 
In this way you do not have to restrict the capacity of the text disk by loading the glossary on to it.Individual terms can be accessed at random while producing the main body of text. How do you get the term you want? Simply press the key marked 'GET'. You will then be asked to identify the module you want. Once you have identified what you want you can press the 'next step' key which gets you back to the text again, and the module is automatically added to the text.The beauty of using a glossary composed on a word processor is that individual terms need only be typed once and can be used any number of times. The method also ensures consistency and saves the bother of trying to remember how you translated a term the last time it appeared. Any repetitive term that you cannot immediately fathom can be identified by a unique code such as ZYX. You can then get the machine to exchange this code for the correct term when you have found out what it should be. It is particularly useful when trying to remember the official names of government or public bodies. I can never remember whether the official translation of Kungliga Arbetarskyddsstyrelsen is The National Swedish Board of Occupational Safety and Health or The National Swedish Board of Occupational Health and Safety.The above procedure ensures permanent storage of the glossary. As you are working through the text you may decide that it might be a good idea to add this or that expression which may not have been obvious during the initial scan. The DFE has a volatile screen memory of about 1 kilobyte which corresponds to about one page of A4 text. This allows temporary storage without having to leave the text processing routine -useful for continuity. You can store the items as blocks of text and identify them with single alpha or numeric character codes; the screen memory will take 40 or more at a time. When the job is finished and before you switch the machine off, you can transfer the temporary storage to permanent storage by adding the contents of the screen memory to the permanently stored glossary compiled earlier.When you start proofreading you may decide to add to or amend your glossary. You can expand your glossary by simply entering more items. Original entries can be amended at will, and the machine gives you the choice of retaining the previous definition or deleting it as required. The glossary is then sorted automatically. You can enter the terms in any order, so you don't have to worry about whether terms are in the right place or not.If you discover that you have translated a word or words incorrectly, or the expert you consulted has come up with the correct expression, you can get the word processor to carry out a global exchange operation. It will search through an entire text and amend each occurrence of the word or words as it goes along. It's quite fascinating to watch this on-screen. The same sequence can be adopted to replace the terms you didn't know earlier -you remember, the one you called ZYX. Incidentally, calling an unknown word ZYX or something similar reduces the risk of forgetting it when you are proofreading.It is very useful to be able to compile glossaries for particular customers. Everybody has their own way of expressing what they want to say. Producing a glossary for your client is very helpful since it allows you both to agree on terminology from the outset. This is particularly useful if there are long gaps between jobs. How on earth do you remember what you said last time? 
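One answer is to keep each customer's glossary in a form the machine can search. The following sketch shows the same per-customer glossary and 'ZYX' placeholder workflow in Python; it is purely an illustration, not the DFE software, and the file layout, function names and sample client are all invented for the example.

```python
# Illustration only: a per-customer glossary plus the "ZYX" placeholder workflow
# described above. Not the DFE software; all names here are invented.

import json
from pathlib import Path

GLOSSARY_DIR = Path("glossaries")          # hypothetical: one file per customer or subject

def load_glossary(customer):
    """Read the stored source-term -> target-term pairs for one customer."""
    path = GLOSSARY_DIR / f"{customer}.json"
    return json.loads(path.read_text(encoding="utf-8")) if path.exists() else {}

def save_glossary(customer, glossary):
    """Store the glossary sorted alphabetically, much as the word processor does."""
    GLOSSARY_DIR.mkdir(exist_ok=True)
    ordered = dict(sorted(glossary.items(), key=lambda kv: kv[0].lower()))
    (GLOSSARY_DIR / f"{customer}.json").write_text(
        json.dumps(ordered, ensure_ascii=False, indent=2), encoding="utf-8")

def lookup(customer, source_term):
    """How did I translate this for that client last time?"""
    return load_glossary(customer).get(source_term)

def resolve_placeholder(draft, code, correct_term):
    """Global exchange: replace every occurrence of a placeholder such as 'ZYX'."""
    return draft.replace(code, correct_term)

# Example: store an agreed rendering, then fill in a placeholder once it is confirmed.
save_glossary("client_example", {
    "Kungliga Arbetarskyddsstyrelsen":
        "The National Swedish Board of Occupational Safety and Health"})
draft = "ZYX has issued new regulations on noise at the workplace."
term = lookup("client_example", "Kungliga Arbetarskyddsstyrelsen")
print(resolve_placeholder(draft, "ZYX", term))
```

The detail of the storage hardly matters; what matters is that the agreed rendering is typed once, retrieved by name, and substituted everywhere in one operation.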
It is possible to manage with a card system, but this means that you have to be meticulous in your record keeping and cards do get lost or damaged. (Isn't it nice just to press a button and get the right text?)The DFE is not restricted to working in one direction. If you wish to compile a multilanguage glossary you can use the data retrieval program. This allows you to compose your own data masks. A mask is essentially a page format which you design for a particular application. In its simplest form it is like squared paper, and each time you access it you are presented with a blank form on which you can then enter the different language texts in the different fields and instruct the machine to sort on whichever field you wish.Let us say, for example, that you wish to compile a glossary in French, German, English, Spanish, Italian and Danish. You can compose a mask and code each field according to language. You can then instruct the data retrieval system to sort on a particular field. Any one of the languages can be used as a source language with the remainder as target languages. You can get the system to produce a list with just two languages if you so wish. The way you format the glossary depends on your own personal choice and on the way in which the glossary is going to be used. Once you have constructed the basic glossary you can play around with it to your heart's content.I have learnt from experience that the odd bit of paper you had with some notes on it tends to get lost all too easily. A permanent record on disk which can be updated as required is a convenience I find hard to manage without. A disk can store several hundred pages of text, and items can be accessed very simply. Just imagine trying to access as rapidly from hard copy and then having to type it again.Reference literature often proves useful but is difficult to store and access unless you have a pet librarian. How often do you know that somewhere you have some information on a particular subject but can't lay your hands on it? You can compile a glossary of all the leaflets and pamphlets you collect, which you can then add to as required. Such information is then always up to date and in alphabetical order. Once you have the information sorted alphabetically by the word processor, it is easy to file away all the paper-work in boxes in the correct order.Frequently, too, you find that you need to look at a job you did several years ago, but have difficulty in finding it. It is possible to store copies of all work on paper. However, if your output as freelance translator is, say, 600,000 words a year, this can amount to 3,000 pages of text. Consider the cost of keeping hard copies in terms of space and what the actual paper costs. Depending on how you buy your paper, there is not a great deal of difference in the price of paper and the cost of disk space capable of storing the same volume of text. You can keep special disks for regular customers, and each disk will store about 200 to 500 pages of text depending on how it is formatted -just imagine what a difference in storage space this is. Letting the customer know that you take a particular interest in storing his work is a good selling point.Other examples of useful data for retrieval are standard layouts, updating of annual reports, official documents and forms, education certificates and many others. All these can be stored as data for retrieval and re-use.Sometimes you may come across difficult technical or legal expressions. 
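As an aside on the multilanguage mask described above, the same idea can be sketched in a few lines of Python. This is purely an illustration, not the DFE data retrieval program; the field names and the sample entries are my own and should be checked before use.

```python
# Illustration only: a "data mask" style multilingual glossary record.
# Field names and sample entries are invented for the example.

LANGUAGES = ["french", "german", "english", "spanish", "italian", "danish"]

records = [
    {"french": "acier",   "german": "Stahl",      "english": "steel",
     "spanish": "acero",  "italian": "acciaio",   "danish": "stål"},
    {"french": "soudage", "german": "Schweißen",  "english": "welding",
     "spanish": "soldadura", "italian": "saldatura", "danish": "svejsning"},
]

def sort_on(entries, source_language):
    """Sort the glossary on whichever field is chosen as the source language."""
    return sorted(entries, key=lambda r: r[source_language].lower())

def two_language_list(entries, source, target):
    """Print a simple two-column list, as the machine can be asked to do."""
    for r in sort_on(entries, source):
        print(f"{r[source]:<20} {r[target]}")

two_language_list(records, "german", "english")
```

Any one field can serve as the source and the rest as targets, and a record of this kind is also a convenient home for those difficult technical or legal expressions.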
It's not always possible to remember a neat way of translating these. Such items are ideal for storing as a glossary or list of terms.You can if you wish store all your information on a single disk and have an online dictionary, but then you get back to having to search through a large volume of information to get what you want. Keep things small and easy to handle.No two jobs are ever quite the same, and many are so individual that you feel that you will never be asked to do the same job again. Yet they turn up when you least expect them. Why waste all the effort you put into the original work when it can so easily be accessed and consulted?Data storage and retrieval can save considerable time, and not only in text processing. With all the invoices you send out, think of the convenience of having a permanent record on disk. Each time you invoice a particular company all you need to do is to change the amount charged. The date is automatically inserted by the machine, and the rest of the information such as address and rates tends to remain unchanged. Imagine the ease of being able to print out a copy of an invoice when faced with a seemingly surprised accountant who says 'Oh? We haven't received your invoice yet!' Make the word processor work for you. After all it does precisely what you tell it to do. Without arguing, and at any time of the day or night. © Geoffrey Samuelsson-Brown, 1985. AUTHOR Geoffrey Samuelsson-Brown, Transcript Translators, 76 Northcott, Bracknell, Berkshire, UK.
null
null
null
null
Main paper: : Perhaps a brief introduction about the way I work would be in order. I have two DFE System 50 word processors, each with twin 8-inch disk drives and printers. One of the machines is operated by a full-time secretary and the other is used for editing and administration. The majority of the work is dictated, though for convenience I type some small jobs on one of the machines. The disks have capacities of 286 and 580 kilobytes according to whether they are single or double density. They permit the storage of between 200 and 500 pages of text, depending on how the text is formatted.Why use a word processor to produce your own glossary if there are perfectly good glossaries available in print?Inevitably the glossaries which are commercially available contain a vast number of terms you already know -but, to your annoyance, they lack precisely the ones you can't find. Some, as you know, are not worth the money you spend on them -a situation you cannot always avoid when buying books from abroad without being able to see them first. This is where the compilation of your own glossaries and the word processor come into use. Another aspect is that particular companies have their own ideas on terminology. What is acceptable to one company may not be Tools for the Trade, V. Lawson (ed.) . © Geoffrey Samuelson-Brown, 1985. acceptable to another even though both words may mean the same thing. Don't be frightened of asking the client for any information. Unless he is dealing with the market concerned for the first time he is more than likely to have some background material which, though it might not say much to him, may provide some vital clues to you.When you have all the material ready for a job then you can start with your initial glossary compilation.The purpose of having a word processor is to cut down on work. If you work directly onto the word processor instead of dictating, the amount of typing time you can cut down on makes the machine even more useful. Long words or terms that occur time and time again throughout the work can be replaced temporarily with abbreviations. If you dictate, all you need to say is 'Insert "so and so"', or whatever the identification of the repeat item is.In a long report on radiation protection, say, terms such as 'The National Swedish Institute of Radiation Protection's Code of Statutes or The Radiation Protection Act (1958:110)' may occur dozens of times. (Since I work primarily with Scandinavian languages, then the bias will naturally be towards such examples.) You will not want to dictate these lengthy expressions each time they occur. Neither will the person who is doing the typing be too happy to keep repeating them. Such items can be made into a temporary glossary in advance of producing the translation and can be accessed as and when required.Using the DFE you can compose a glossary of terms in a number of ways depending on how you want to use and store the work. The machine has several methods of storage, both volatile and non-volatile.First there are the terms that you want to store permanently as a glossary. I find it practical to have separate disks for main subject groups such as heat treatment, automobile engineering, corrosion, offshore engineering, software, tools, accounting terms or annual reports, and also for particular customers.One way to store items is as modules. A module is an independent unit of text which can be integrated with other text or merged with other modules to form larger units of text. 
Take a car maintenance manual as an example. A 24,000 mile service will include all the 12,000 mile service items which, in turn, will include the 6,000 mile service items. Making up schedules from a basic set of modules is much better than having to type the same instructions over and over again. The size of the module is limited only by practical considerations and the only essential limitation is the length of the name that identifies the module. This is restricted to 16 alphanumeric characters but is usually quite sufficient for normal identification. Modules are stored on a text disk, or working disk as it is sometimes called.If you find you want to store format commands or short pieces of text then you can use what is called a multicode. A multicode is identified by a single character, either alpha or numeric, and its storage capacity is limited to 240 characters. An advantage of a multicode is that the machine will allow it to be stored on the software disk or program disk. This is useful since you can then have standard format commands on the software disk for use in conjunction with any text disk you care to insert in the machine. Multicodes are volatile, so they need to be stored before the system is either reset or switched off. However, for most purposes, the use of modules is most practical.As I said, the item is identified by up to 16 alphanumeric characters which are enclosed by a format symbol at either end. If you enter too many characters, the system will tell you so when you ask it to format the work. It will display 'TOO MANY CHARACTERS' and the cursor will go to the 17th character. It will also remain silent unless you ignore the message for more than 10 seconds; it then gets annoyed and starts to bleep. This way you don't even have to be able to count to use a word processor. A single key stroke eliminates the excess characters and you can then continue.You can use the machine's print facilities to highlight the keyword or term by emboldening, underlining or tabulating. The way in which you produce the final copy is entirely a matter of personal choice.When you have entered all the information and terms you feel you need, the first draft of the glossary is ready for use. There may be terms which you are not quite sure about. Leave these for the time being. They may become apparent as you work through the text, or you may wish to consult a specialist. He may not come up with an answer immediately, and you may need to revise your glossary at a later date.You don't have to worry about putting the words or terms in alphabetical order in the glossary since the DFE does this automatically. It also tells you if you try to enter the same thing twice, by displaying 'NAME ALREADY PRESENT'. In fact the machine is very logical and stops you doing lots of silly things.You can then print your glossary out for later use when you are translating the main body of the text. Having a selective glossary which may be contained on one or two pages is far more convenient than having to leaf through pages in a dictionary. How many pages in a dictionary do you actually have to leaf through to find the word or expression you want? Or how many dictionaries, for that matter?You can print out the list of module names (which are sorted alphanumerically) or the contents of the modules (in the same order). For the sake of convenience you can have the glossary on one disk and the translation on another. 
In this way you do not have to restrict the capacity of the text disk by loading the glossary on to it.Individual terms can be accessed at random while producing the main body of text. How do you get the term you want? Simply press the key marked 'GET'. You will then be asked to identify the module you want. Once you have identified what you want you can press the 'next step' key which gets you back to the text again, and the module is automatically added to the text.The beauty of using a glossary composed on a word processor is that individual terms need only be typed once and can be used any number of times. The method also ensures consistency and saves the bother of trying to remember how you translated a term the last time it appeared. Any repetitive term that you cannot immediately fathom can be identified by a unique code such as ZYX. You can then get the machine to exchange this code for the correct term when you have found out what it should be. It is particularly useful when trying to remember the official names of government or public bodies. I can never remember whether the official translation of Kungliga Arbetarskyddsstyrelsen is The National Swedish Board of Occupational Safety and Health or The National Swedish Board of Occupational Health and Safety.The above procedure ensures permanent storage of the glossary. As you are working through the text you may decide that it might be a good idea to add this or that expression which may not have been obvious during the initial scan. The DFE has a volatile screen memory of about 1 kilobyte which corresponds to about one page of A4 text. This allows temporary storage without having to leave the text processing routine -useful for continuity. You can store the items as blocks of text and identify them with single alpha or numeric character codes; the screen memory will take 40 or more at a time. When the job is finished and before you switch the machine off, you can transfer the temporary storage to permanent storage by adding the contents of the screen memory to the permanently stored glossary compiled earlier.When you start proofreading you may decide to add to or amend your glossary. You can expand your glossary by simply entering more items. Original entries can be amended at will, and the machine gives you the choice of retaining the previous definition or deleting it as required. The glossary is then sorted automatically. You can enter the terms in any order, so you don't have to worry about whether terms are in the right place or not.If you discover that you have translated a word or words incorrectly, or the expert you consulted has come up with the correct expression, you can get the word processor to carry out a global exchange operation. It will search through an entire text and amend each occurrence of the word or words as it goes along. It's quite fascinating to watch this on-screen. The same sequence can be adopted to replace the terms you didn't know earlier -you remember, the one you called ZYX. Incidentally, calling an unknown word ZYX or something similar reduces the risk of forgetting it when you are proofreading.It is very useful to be able to compile glossaries for particular customers. Everybody has their own way of expressing what they want to say. Producing a glossary for your client is very helpful since it allows you both to agree on terminology from the outset. This is particularly useful if there are long gaps between jobs. How on earth do you remember what you said last time? 
It is possible to manage with a card system, but this means that you have to be meticulous in your record keeping and cards do get lost or damaged. (Isn't it nice just to press a button and get the right text?)The DFE is not restricted to working in one direction. If you wish to compile a multilanguage glossary you can use the data retrieval program. This allows you to compose your own data masks. A mask is essentially a page format which you design for a particular application. In its simplest form it is like squared paper, and each time you access it you are presented with a blank form on which you can then enter the different language texts in the different fields and instruct the machine to sort on whichever field you wish.Let us say, for example, that you wish to compile a glossary in French, German, English, Spanish, Italian and Danish. You can compose a mask and code each field according to language. You can then instruct the data retrieval system to sort on a particular field. Any one of the languages can be used as a source language with the remainder as target languages. You can get the system to produce a list with just two languages if you so wish. The way you format the glossary depends on your own personal choice and on the way in which the glossary is going to be used. Once you have constructed the basic glossary you can play around with it to your heart's content.I have learnt from experience that the odd bit of paper you had with some notes on it tends to get lost all too easily. A permanent record on disk which can be updated as required is a convenience I find hard to manage without. A disk can store several hundred pages of text, and items can be accessed very simply. Just imagine trying to access as rapidly from hard copy and then having to type it again.Reference literature often proves useful but is difficult to store and access unless you have a pet librarian. How often do you know that somewhere you have some information on a particular subject but can't lay your hands on it? You can compile a glossary of all the leaflets and pamphlets you collect, which you can then add to as required. Such information is then always up to date and in alphabetical order. Once you have the information sorted alphabetically by the word processor, it is easy to file away all the paper-work in boxes in the correct order.Frequently, too, you find that you need to look at a job you did several years ago, but have difficulty in finding it. It is possible to store copies of all work on paper. However, if your output as freelance translator is, say, 600,000 words a year, this can amount to 3,000 pages of text. Consider the cost of keeping hard copies in terms of space and what the actual paper costs. Depending on how you buy your paper, there is not a great deal of difference in the price of paper and the cost of disk space capable of storing the same volume of text. You can keep special disks for regular customers, and each disk will store about 200 to 500 pages of text depending on how it is formatted -just imagine what a difference in storage space this is. Letting the customer know that you take a particular interest in storing his work is a good selling point.Other examples of useful data for retrieval are standard layouts, updating of annual reports, official documents and forms, education certificates and many others. All these can be stored as data for retrieval and re-use.Sometimes you may come across difficult technical or legal expressions. 
It's not always possible to remember a neat way of translating these. Such items are ideal for storing as a glossary or list of terms.You can if you wish store all your information on a single disk and have an online dictionary, but then you get back to having to search through a large volume of information to get what you want. Keep things small and easy to handle.No two jobs are ever quite the same, and many are so individual that you feel that you will never be asked to do the same job again. Yet they turn up when you least expect them. Why waste all the effort you put into the original work when it can so easily be accessed and consulted?Data storage and retrieval can save considerable time, and not only in text processing. With all the invoices you send out, think of the convenience of having a permanent record on disk. Each time you invoice a particular company all you need to do is to change the amount charged. The date is automatically inserted by the machine, and the rest of the information such as address and rates tends to remain unchanged. Imagine the ease of being able to print out a copy of an invoice when faced with a seemingly surprised accountant who says 'Oh? We haven't received your invoice yet!' Make the word processor work for you. After all it does precisely what you tell it to do. Without arguing, and at any time of the day or night. © Geoffrey Samuelsson-Brown, 1985. AUTHOR Geoffrey Samuelsson-Brown, Transcript Translators, 76 Northcott, Bracknell, Berkshire, UK. Appendix:
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
495
0
null
null
null
null
null
null
null
null
e1f11a97739077da6b1b044d67b8f129b2220952
237295792
null
Rapid post-editing of Systran
Experimental use of Systran French-English translation at the CEC has indicated that currently the most promising application of this MT system is for rapid, reduced-quality translations; in this case changes to Systran raw output are restricted to the absolute minimum. The reactions of translators and users to this type of work are described.
{ "name": [ "Wagner, Elizabeth" ], "affiliation": [ null ] }
null
null
Proceedings of Translating and the Computer 5: Tools for the trade
1983-11-01
0
13
null
Although we use Systran at the Commission, and have been doing so for several years, I think it is necessary to point out that it occupies only a very small place in our overall workload, both at individual level and in the translation services. Some of the translation divisions cannot use Systran at all, because of the restricted number of language pairs it covers.

At present, the European Community has ten Member States and seven official languages -Danish, Dutch, English, French, German, Greek and Italian. The number of possible translation directions, or language pairs, is 42 (see Figure 1). The Systran machine translation system now in use at the Commission in Luxembourg works in three of these language pairs -English into French, English into Italian, and French into English -and is used in the French, Italian and English Divisions for part of their work in those language pairs. For all the other divisions, and all other language pairs, human translation is the only option. So there is no question of imposing universal machine translation at the Commission -Systran can only cope with part of our work.

In the English Division in Luxembourg, Systran was first introduced for French-to-English translation in 1981, and has been used in various pilot schemes, so we are now well acquainted with the system and its uses and limitations. In 1982 the translation workload of the English Division was composed of the following: translation from French 45 per cent, German 31 per cent, Dutch 9 per cent, Italian 8 per cent, Danish 4 per cent and Greek and other languages 3 per cent (see Figure 2). You will see that translation from French into English accounts for the largest share of our work; at the Commission in Brussels, and in other Community institutions, the percentage of French-to-English is even higher. That is of course the reason why the Commission chose to develop Systran in this language pair. Of the French translated in the English Division in 1982, 11 per cent was translated by Systran with either full or rapid post-editing; this is equivalent to 5 per cent of our total translation workload.

From now on I shall be referring specifically to the English Translation Division in Luxembourg, since this is the one where rapid post-editing has been most extensively developed. As yet there has been very little demand for rapid post-editing in the other divisions using Systran (French and Italian).

The introduction of Systran to the English Division was very skilfully handled to minimise translators' resistance and allay their fears. Throughout the various experiments conducted, the stress has been on voluntary participation, and different types of assessment forms have been prepared to record translators' reactions and comments. Our colleagues on the Systran Development Team, who are themselves professional translators, have always been extremely helpful and patient in dealing with translators' criticisms and feedback.

In the first phase, to show translators what the system was like, selected French texts were machine translated into English and sent to the translators together with the original; the translators were then free to choose whether to post-edit the raw MT, use it as a basis for dictation, or ignore it completely and translate in the conventional way -dictation and correction of typescript. At this stage the aim was to produce a translation of normal quality, which would then be revised.
Translators had no access to word processing facilities, and post-editing had to be done by manuscript correction of hard copy. 2. Leaving the choice to the translator Once the translation staff had gained some experience with Systran, the next logical step was to leave it to translators to request a Systran translation of any French text if they thought it would be useful as the basis for a normal-quality translation, which would then be revised in the normal way. This system is still in use, and we now have limited access to word processors and the chance to work on-screen, which is obviously the most efficient way of dealing with raw MT. However, there are very few translators who do choose to post-edit a raw Systran translation rather than translate in the conventional way, as the majority of them feel that Systran does not help them to produce faster or better translations.Many, but not all, translators decided, after the first phase of the MT experiment, that Systran was not a translation aid, because they found that it took too long, and was too tedious, to convert raw MT into a translation 'to which they would be prepared to put their name'. Translators are not purists or perfectionists, but they see language as a means of communication, and they are painfully aware that communication can be impeded by a bad translation. Their work is constantly criticised, first of all by revisers, and then in many cases by translation users, and as a result they become hypersensitive to language. Translators are always aware of the reviser's red pen hovering over every word they write, and they are conscious that they have a reputation to maintain.We therefore decided to use Systran in a different way -to provide a faster translation service for those translation users who wanted it, and were willing to accept lower-quality translation. The basic idea of rapid post-editing is to restrict post-editing to an absolute minimum but to maintain comprehensibility and reasonable accuracy. These texts are never revised, and word processors are used as extensively as possible. The decision whether or not to use this faster service (and therefore Systran) lies with the translation user, not the translator, and the user is warned that the translation will be of lower quality (i.e. will possibly contain inaccuracies, grammatical mistakes and unclear turns of phrase). The project was explained to a selected group of translation users at the Commission, who were given samples of the sort of end-product they were likely to receive. When the project was presented to the translation staff it was well received, and thirteen out of thirty-five volunteered to do this kind of work, on the understanding that they could opt out if they did not enjoy it.The project started up in May 1982, and last year 11 per cent of our French-to-English workload was translated by Systran, 9 per cent with full post-editing and 2 per cent with rapid post-editing. This year, up to August, the figures were 4 per cent rapid post-editing and 12 per cent full post-editing, making a Systran total of 16 per cent of our French-to-English translation (see Figure 3 ).It was interesting to note that the thirteen translators who volunteered to do rapid post-editing were all experienced staff, including three revisers; a certain amount of confidence in one's own translation ability and technical expertise is essential for this type of work. 
Just because rapid post-editing yields lower-quality translation, it should not be assumed that it can be undertaken by inexperienced staff. In fact it is quite the reverse -unless the post-editor has a high level of linguistic and technical knowledge he will not be able to post-edit the raw output to a reasonable standard in the recommended time.The Commission departments we serve in Luxembourg all deal with fairly technical subjects -medicine, industrial safety, coal and steel, statistics, finance, nuclear safeguards and information science -and to enable translators to cope more efficiently with this wide range of subject-matter, each translation division has a system of specialised groups. In the English Division translators are divided into four groups: Economics and Finance, Technology, Information and Publications, and Social Affairs. There were volunteers for rapid post-editing from all four groups, but in fact virtually all the demand for rapid post-editing has been from translation users served by the Technology Group, and this is why some of us now have extensive experience of rapid post-editing, while others have not yet had any.Apart from an excellent knowledge of the source language (in our case French) and of the technical terminology of the subject-matter, post-editors should ideally have expertise on the word processor. Our system is a Wang OIS 130 and everyone in the English Division who has tried it is very enthusiastic about working on-screen. We have found that translators are not at all 'afraid of computers', as is sometimes claimed -in fact both the word processing equipment and the Eurodicautom terminology data bank were very quickly accepted as genuine machine aids to translation.Our word processors are under such pressure that some post-editors still have to correct raw output by hand, but this can defeat the object of the exercise, which is to provide a rapid service for the user.The main criterion in this type of work is speed. As a guide, post-editors were advised never to spend more than half an hour on any one page. When working on-screen it is possible to rapid-post-edit raw MT at a rate of four pages per hour. But this figure should be handled with care. Although we can and do process 40-page texts in two days, it is extremely unlikely that any translator would be willing or able to post-edit 160 pages in a forty-hour week. In our Division it is rarely possible to work on the word processor for more than four hours at a time, partly because it is not available, but even if it were, I think it would be difficult to maintain the required level of concentration for a longer period.As regards the density of post-editing, it is difficult to lay down rules, as the number of corrections will depend on the individual post-editor's preferences and the quality of the raw MT, which can vary considerably. There has been a general improvement in raw output on the basis of feedback from translators, but a certain amount of time always has to be spent eliminating simple mistakes (pronouns, prepositions, possessive adjectives, etc.) in order to make the text intelligible.An example of Systran raw output (in Figure 4) is shown on page 208. This is what we start with, and I personally find the best approach is to treat the whole thing like a game of Scrabble. 
I say to myself: 'Well, these are the words I've got -how can I rearrange them, with minimal changes, into something roughly approximating the meaning of the original French text?'Like this example, most of the rapid post-editing we do is for the translation of minutes. These are always written in the present tense in French, but must be written in reported speech in English. The tense conversion is carried out automatically by a Systran sub-routine. This example Apart from the common simple errors -simple in that they are easy to correct, but must nevertheless be corrected if the text is to be intelligible -there are always a certain number of errors due to mistakes in input of the source language. The input typing must be of extremely high quality, by native speakers of the source language if possible, as Systran cannot forgive a single error. Even a mistake in accentuation, or typing qu"on instead of qu'on, will lead to a not-found word which can affect Systran's syntax analysis. Mistakes in capitalisation can be serious too: in this type of text 'Commission' equals 'Commission' but 'commission' equals 'Subcommittee'. Although the number of corrections may seem high, most of the changes to the above text are straightforward corrections of simple mistakes, which can be carried out very quickly. Rapid post-editing becomes more difficult and time-consuming when the language of the original is more colourful and 'natural', as I shall now demonstrate. These minutes are written in French, regardless of the language actually used by the speaker at the meeting -in the first passage shown, the Chairman was speaking in German and so the language he used had already been 'pre-translated' by the French minute-writers, and any non-transferable German idioms will have been paraphrased, thus making this passage more suitable for machine translation.A second example ( Figure 5 ) is taken from the same set of minutes, but since it summarises a speech by a French speaker, the idioms have been reproduced, not paraphrased, and the language is generally more colourful. This is more difficult to post-edit, as it calls for genuine retranslation rather than straightforward correction.Changes can be made very rapidly using the word processor, and we have developed a number of automatic text-processing functions for post-editing, tailored to cope with Systran's most common mistakes. These can be used to improve layout, insert the correct titles of various organisations and committees, reverse words or rearrange them in other ways, and convert a phrase such as 'equipment of the office' to 'office equipment' with two keystrokes. The global change and search facilities also help to speed up work considerably. To save time, one can flag doubtful passages or terms as one goes through the raw MT, and then go back to them later, after doing some research.With rapid post-editing, there is little time for research on terminology and background documents, and this is the main reason why the post-editors have to be experienced staff. But the texts for which rapid post-editing is most commonly requested (the minutes mentioned above) are always well documented and contain few serious problems of terminology. More complicated texts, for example coal and steel research reports, tend to fare very badly when machine translated, for two reasons. 
One is that they are difficult anyway -the subject-matter is too new to be covered by multilingual dictionaries or even by our standard works of reference such as Kempe's Engineers' Year-Book and periodicals such as Steel Times and Colliery Guardian. The other reason isand for many translators this is Systran's main drawbackthat machine translation is unreliable. If the translator, or in this case the post-editor, is not sure of the meaning or the correct translation, he must ascertain it -by consulting colleagues, or libraries, or the Terminology Bureau, or even the author of the original text, all of which takes time. Only then can he judge whether the Systran translation is correct. In other words he cannot trust Systran to have got it right, and anyone who has any experience of technical translation will understand that it would be unreasonable to expect a machine to do so. So the various components of time spent on post-editing raw MT are as follows: correction of simple mistakes (pronouns, prepositions, etc.), correction of major mistakes (sometimes rewriting of whole sentences), research, and 'decision time' -in this case, deciding whether the raw MT needs to be corrected or is acceptable as it stands (see Figure 6 ).None of the original volunteers for this project has opted out, and on their assessment forms the post-editors usually rate rapid post-editing as 'an interesting challenge' or 'an acceptable piece of work'. It introduces a certain amount of variety into our work, and of course a welcome degree of independence, as this work is never revised. If there is a genuine requirement for fast, reduced-quality translation, the staff who volunteered for this project are willing to provide it, on condition that every translation is clearly marked 'rapid-post-edited Systran machine translation'.Some concern has been expressed about the possible danger of general translation standards being lowered by exposure to MT. At present the volume of rapid-postediting work is so small that this danger is non-existent. Even with more extensive exposure to MT the effect would be very difficult to assess objectively, as there are so many influences which can affect translation standards, not the least of which, in our case, is the fact that we live outside an English-speaking environment. In any case we are not 'translating into a void'; we rely on our users to tell us if what we are producing is unacceptable.As explained above, rapid post-editing is carried out only when users specifically request it and are prepared to accept a lower-quality translation. In other words the volume of this type of work is determined by the users, not the translators. At the Commission, the users of rapid-post-edited MT are very enthusiastic about the product, and particularly about the advantages of text processing facilities. There has never been any criticism of the lower translation quality. But the situation at present is that only a very small number of translation users ask for this service. There are several possible reasons for this:(a) because the service is quite new, and is only available in the three language pairs covered by Systran;(b) because the demand for this 'information-scanning' type of translation is relatively low at the Commission, where most translation users have a reasonable knowledge of French and English, the source languages covered by our version of Systran;(c) because of the translation user's function at the Commission. 
Although I have referred throughout this paper to translation 'users', the term we use at the Commission is 'requesters'.The distinction is important. In many cases the translation requester is a sort of middleman: he does not need the translation for his own personal use, as he may be the author of the document, or may be perfectly capable of understanding the source language, but he has to distribute translations of working documents and minutes to national representatives on the committee or working party for which he is responsible.These committee members are the real translation users -the people who depend on us to provide an accurate translation of a 'foreign' text to help them in their committee's work. So the translation requester at the Commission may not be sure whether a rapid-post-edited translation will be acceptable to the end-users. And in cases where the requester does not know the target language well, and cannot judge the quality of the translation for himself, he will not want to risk sending out a lower-quality translation to his committee members. Ideally, the RPE requester should be a native speaker of the target language, and thus able to judge whether lower-quality translation is acceptable, or possibly to correct the terminology or improve the style himself.Some people might ask why it is not possible to spend longer on post-editing and do a 'proper', rather than a 'rapid', job. This is a perfectly reasonable question, and indeed the small number of translators who request Systran translations themselves do exactly that, i.e. use Systran as a machine aid. However, it raises the central problem of MT acceptability for translators. Many feel that Systran is not an aid, but a hindrance, because it limits their freedom of expression. Translating is a creative job -the translator uses (a) his understanding of the source language to determine the meaning of a text and (b) his command of the target language to express that meaning, by creating a correct, faithful translation. If his range of expression is restricted in any way (cf. my analogy with Scrabble) he will not be able to express that meaning so well. After all, the fundamental limitation of Systran is that it translates the words, not the meaning.To sum up, these are the basic requirements for a rapid-post-editing service in an organisation such as ours:Translation users who are willing to accept a lower quality of translation, ideally native speakers of the target language;Technical back-up: in addition to the Systran MT facilities, adequate word processing facilities and excellent typists for input of the SL text;Post-editors who have extensive translation experience in the appropriate language pair and subject field, are willing to work on-screen, and are able to adapt their translation standards to the user's requirements.Elizabeth Wagner, English Translation Division, Commission of the European Communities, Bâtiment Jean Monnet, BP 1907, Plateau du Kirchberg, Luxembourg GD.
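As an illustration of the automatic text-processing functions for post-editing mentioned earlier -the kind that convert 'equipment of the office' to 'office equipment' or flag doubtful terms for later research -the following Python sketch shows the same idea in outline. It is emphatically not the Wang OIS macro set used at the Commission; the patterns, function names and sample sentence are invented for the example, and a crude pattern of this kind would of course still need the post-editor's eye.

```python
# Rough illustration of the kind of post-editing helpers described in this paper.
# These are NOT the Wang OIS macros; patterns and the flag convention are invented.

import re

def noun_group_swap(text):
    """Convert 'equipment of the office' style phrases to 'office equipment'."""
    return re.sub(r"\b(\w+) of the (\w+)\b", r"\2 \1", text)

def fix_official_titles(text, titles):
    """Global exchange of raw-MT renderings for agreed official titles."""
    for raw, official in titles.items():
        text = text.replace(raw, official)
    return text

def flag_doubtful(text, terms, marker="??"):
    """Mark doubtful terms so the post-editor can return to them after research."""
    for t in terms:
        text = re.sub(rf"\b{re.escape(t)}\b", f"{marker}{t}{marker}", text)
    return text

raw = "The equipment of the office and the programme of the committee were discussed."
print(noun_group_swap(raw))
# -> "The office equipment and the committee programme were discussed."
```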
null
null
null
null
Main paper: introducing mt to translators: In the first phase, to show translators what the system was like, selected French texts were machine translated into English and sent to the translators together with the original; the translators were then free to choose whether to post-edit the raw MT, use it as a basis for dictation, or ignore it completely and translate in the conventional waydictation and correction of typescript. At this stage the aim was to produce a translation of normal quality, which would then be revised. Translators had no access to word processing facilities, and post-editing had to be done by manuscript correction of hard copy. 2. Leaving the choice to the translator Once the translation staff had gained some experience with Systran, the next logical step was to leave it to translators to request a Systran translation of any French text if they thought it would be useful as the basis for a normal-quality translation, which would then be revised in the normal way. This system is still in use, and we now have limited access to word processors and the chance to work on-screen, which is obviously the most efficient way of dealing with raw MT. However, there are very few translators who do choose to post-edit a raw Systran translation rather than translate in the conventional way, as the majority of them feel that Systran does not help them to produce faster or better translations.Many, but not all, translators decided, after the first phase of the MT experiment, that Systran was not a translation aid, because they found that it took too long, and was too tedious, to convert raw MT into a translation 'to which they would be prepared to put their name'. Translators are not purists or perfectionists, but they see language as a means of communication, and they are painfully aware that communication can be impeded by a bad translation. Their work is constantly criticised, first of all by revisers, and then in many cases by translation users, and as a result they become hypersensitive to language. Translators are always aware of the reviser's red pen hovering over every word they write, and they are conscious that they have a reputation to maintain. leaving the choice to the translation user: We therefore decided to use Systran in a different way -to provide a faster translation service for those translation users who wanted it, and were willing to accept lower-quality translation. The basic idea of rapid post-editing is to restrict post-editing to an absolute minimum but to maintain comprehensibility and reasonable accuracy. These texts are never revised, and word processors are used as extensively as possible. The decision whether or not to use this faster service (and therefore Systran) lies with the translation user, not the translator, and the user is warned that the translation will be of lower quality (i.e. will possibly contain inaccuracies, grammatical mistakes and unclear turns of phrase). The project was explained to a selected group of translation users at the Commission, who were given samples of the sort of end-product they were likely to receive. When the project was presented to the translation staff it was well received, and thirteen out of thirty-five volunteered to do this kind of work, on the understanding that they could opt out if they did not enjoy it.The project started up in May 1982, and last year 11 per cent of our French-to-English workload was translated by Systran, 9 per cent with full post-editing and 2 per cent with rapid post-editing. 
This year, up to August, the figures were 4 per cent rapid post-editing and 12 per cent full post-editing, making a Systran total of 16 per cent of our French-to-English translation (see Figure 3 ).It was interesting to note that the thirteen translators who volunteered to do rapid post-editing were all experienced staff, including three revisers; a certain amount of confidence in one's own translation ability and technical expertise is essential for this type of work. Just because rapid post-editing yields lower-quality translation, it should not be assumed that it can be undertaken by inexperienced staff. In fact it is quite the reverse -unless the post-editor has a high level of linguistic and technical knowledge he will not be able to post-edit the raw output to a reasonable standard in the recommended time.The Commission departments we serve in Luxembourg all deal with fairly technical subjects -medicine, industrial safety, coal and steel, statistics, finance, nuclear safeguards and information science -and to enable translators to cope more efficiently with this wide range of subject-matter, each translation division has a system of specialised groups. In the English Division translators are divided into four groups: Economics and Finance, Technology, Information and Publications, and Social Affairs. There were volunteers for rapid post-editing from all four groups, but in fact virtually all the demand for rapid post-editing has been from translation users served by the Technology Group, and this is why some of us now have extensive experience of rapid post-editing, while others have not yet had any.Apart from an excellent knowledge of the source language (in our case French) and of the technical terminology of the subject-matter, post-editors should ideally have expertise on the word processor. Our system is a Wang OIS 130 and everyone in the English Division who has tried it is very enthusiastic about working on-screen. We have found that translators are not at all 'afraid of computers', as is sometimes claimed -in fact both the word processing equipment and the Eurodicautom terminology data bank were very quickly accepted as genuine machine aids to translation.Our word processors are under such pressure that some post-editors still have to correct raw output by hand, but this can defeat the object of the exercise, which is to provide a rapid service for the user.The main criterion in this type of work is speed. As a guide, post-editors were advised never to spend more than half an hour on any one page. When working on-screen it is possible to rapid-post-edit raw MT at a rate of four pages per hour. But this figure should be handled with care. Although we can and do process 40-page texts in two days, it is extremely unlikely that any translator would be willing or able to post-edit 160 pages in a forty-hour week. In our Division it is rarely possible to work on the word processor for more than four hours at a time, partly because it is not available, but even if it were, I think it would be difficult to maintain the required level of concentration for a longer period.As regards the density of post-editing, it is difficult to lay down rules, as the number of corrections will depend on the individual post-editor's preferences and the quality of the raw MT, which can vary considerably. 
There has been a general improvement in raw output on the basis of feedback from translators, but a certain amount of time always has to be spent eliminating simple mistakes (pronouns, prepositions, possessive adjectives, etc.) in order to make the text intelligible. An example of Systran raw output (in Figure 4) is shown on page 208. This is what we start with, and I personally find the best approach is to treat the whole thing like a game of Scrabble. I say to myself: 'Well, these are the words I've got -how can I rearrange them, with minimal changes, into something roughly approximating the meaning of the original French text?' Like this example, most of the rapid post-editing we do is for the translation of minutes. These are always written in the present tense in French, but must be written in reported speech in English. The tense conversion is carried out automatically by a Systran sub-routine, as this example shows. Apart from the common simple errors -simple in that they are easy to correct, but must nevertheless be corrected if the text is to be intelligible -there are always a certain number of errors due to mistakes in input of the source language. The input typing must be of extremely high quality, by native speakers of the source language if possible, as Systran cannot forgive a single error. Even a mistake in accentuation, or typing qu"on instead of qu'on, will lead to a not-found word which can affect Systran's syntax analysis. Mistakes in capitalisation can be serious too: in this type of text 'Commission' equals 'Commission' but 'commission' equals 'Subcommittee'. Although the number of corrections may seem high, most of the changes to the above text are straightforward corrections of simple mistakes, which can be carried out very quickly. Rapid post-editing becomes more difficult and time-consuming when the language of the original is more colourful and 'natural', as I shall now demonstrate. These minutes are written in French, regardless of the language actually used by the speaker at the meeting -in the first passage shown, the Chairman was speaking in German and so the language he used had already been 'pre-translated' by the French minute-writers, and any non-transferable German idioms will have been paraphrased, thus making this passage more suitable for machine translation. A second example (Figure 5) is taken from the same set of minutes, but since it summarises a speech by a French speaker, the idioms have been reproduced, not paraphrased, and the language is generally more colourful. This is more difficult to post-edit, as it calls for genuine retranslation rather than straightforward correction. Changes can be made very rapidly using the word processor, and we have developed a number of automatic text-processing functions for post-editing, tailored to cope with Systran's most common mistakes. These can be used to improve layout, insert the correct titles of various organisations and committees, reverse words or rearrange them in other ways, and convert a phrase such as 'equipment of the office' to 'office equipment' with two keystrokes. The global change and search facilities also help to speed up work considerably. To save time, one can flag doubtful passages or terms as one goes through the raw MT, and then go back to them later, after doing some research. With rapid post-editing, there is little time for research on terminology and background documents, and this is the main reason why the post-editors have to be experienced staff.
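The text-processing functions mentioned above were keystroke macros on the word processor; as a rough modern illustration of the same idea (in Python, entirely invented and not part of the Wang setup described), the sketch below applies a short list of global rewrite rules of the kind a post-editor might run over raw MT, including the 'equipment of the office' to 'office equipment' rearrangement cited in the text.

```python
import re

# Hypothetical rewrite rules for illustration only; the real macros were
# word-processor keystroke functions, not regular expressions.
REWRITE_RULES = [
    # turn a literal "X of the Y" into the English noun compound "Y X"
    (re.compile(r"\b(\w+) of the (\w+)\b"), r"\2 \1"),
    # replace a typical French-influenced literalism
    (re.compile(r"\bforesees\b"), "provides for"),
]

def rough_postedit(text: str) -> str:
    """Apply each rewrite rule in turn; the result still needs human review."""
    for pattern, replacement in REWRITE_RULES:
        text = pattern.sub(replacement, text)
    return text

print(rough_postedit("equipment of the office"))               # -> office equipment
print(rough_postedit("The regulation foresees an increase."))  # -> The regulation provides for an increase.
```

Such rules handle only the mechanical corrections; the judgement calls described in the article remain with the post-editor.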
But the texts for which rapid post-editing is most commonly requested (the minutes mentioned above) are always well documented and contain few serious problems of terminology. More complicated texts, for example coal and steel research reports, tend to fare very badly when machine translated, for two reasons. One is that they are difficult anyway -the subject-matter is too new to be covered by multilingual dictionaries or even by our standard works of reference such as Kempe's Engineers' Year-Book and periodicals such as Steel Times and Colliery Guardian. The other reason is - and for many translators this is Systran's main drawback - that machine translation is unreliable. If the translator, or in this case the post-editor, is not sure of the meaning or the correct translation, he must ascertain it -by consulting colleagues, or libraries, or the Terminology Bureau, or even the author of the original text, all of which takes time. Only then can he judge whether the Systran translation is correct. In other words he cannot trust Systran to have got it right, and anyone who has any experience of technical translation will understand that it would be unreasonable to expect a machine to do so. So the various components of time spent on post-editing raw MT are as follows: correction of simple mistakes (pronouns, prepositions, etc.), correction of major mistakes (sometimes rewriting of whole sentences), research, and 'decision time' -in this case, deciding whether the raw MT needs to be corrected or is acceptable as it stands (see Figure 6). None of the original volunteers for this project has opted out, and on their assessment forms the post-editors usually rate rapid post-editing as 'an interesting challenge' or 'an acceptable piece of work'. It introduces a certain amount of variety into our work, and of course a welcome degree of independence, as this work is never revised. If there is a genuine requirement for fast, reduced-quality translation, the staff who volunteered for this project are willing to provide it, on condition that every translation is clearly marked 'rapid-post-edited Systran machine translation'. Some concern has been expressed about the possible danger of general translation standards being lowered by exposure to MT. At present the volume of rapid-post-editing work is so small that this danger is non-existent. Even with more extensive exposure to MT the effect would be very difficult to assess objectively, as there are so many influences which can affect translation standards, not the least of which, in our case, is the fact that we live outside an English-speaking environment. In any case we are not 'translating into a void'; we rely on our users to tell us if what we are producing is unacceptable. As explained above, rapid post-editing is carried out only when users specifically request it and are prepared to accept a lower-quality translation. In other words the volume of this type of work is determined by the users, not the translators. At the Commission, the users of rapid-post-edited MT are very enthusiastic about the product, and particularly about the advantages of text processing facilities. There has never been any criticism of the lower translation quality. But the situation at present is that only a very small number of translation users ask for this service.
There are several possible reasons for this:(a) because the service is quite new, and is only available in the three language pairs covered by Systran;(b) because the demand for this 'information-scanning' type of translation is relatively low at the Commission, where most translation users have a reasonable knowledge of French and English, the source languages covered by our version of Systran;(c) because of the translation user's function at the Commission. Although I have referred throughout this paper to translation 'users', the term we use at the Commission is 'requesters'.The distinction is important. In many cases the translation requester is a sort of middleman: he does not need the translation for his own personal use, as he may be the author of the document, or may be perfectly capable of understanding the source language, but he has to distribute translations of working documents and minutes to national representatives on the committee or working party for which he is responsible.These committee members are the real translation users -the people who depend on us to provide an accurate translation of a 'foreign' text to help them in their committee's work. So the translation requester at the Commission may not be sure whether a rapid-post-edited translation will be acceptable to the end-users. And in cases where the requester does not know the target language well, and cannot judge the quality of the translation for himself, he will not want to risk sending out a lower-quality translation to his committee members. Ideally, the RPE requester should be a native speaker of the target language, and thus able to judge whether lower-quality translation is acceptable, or possibly to correct the terminology or improve the style himself.Some people might ask why it is not possible to spend longer on post-editing and do a 'proper', rather than a 'rapid', job. This is a perfectly reasonable question, and indeed the small number of translators who request Systran translations themselves do exactly that, i.e. use Systran as a machine aid. However, it raises the central problem of MT acceptability for translators. Many feel that Systran is not an aid, but a hindrance, because it limits their freedom of expression. Translating is a creative job -the translator uses (a) his understanding of the source language to determine the meaning of a text and (b) his command of the target language to express that meaning, by creating a correct, faithful translation. If his range of expression is restricted in any way (cf. my analogy with Scrabble) he will not be able to express that meaning so well. After all, the fundamental limitation of Systran is that it translates the words, not the meaning.To sum up, these are the basic requirements for a rapid-post-editing service in an organisation such as ours:Translation users who are willing to accept a lower quality of translation, ideally native speakers of the target language;Technical back-up: in addition to the Systran MT facilities, adequate word processing facilities and excellent typists for input of the SL text;Post-editors who have extensive translation experience in the appropriate language pair and subject field, are willing to work on-screen, and are able to adapt their translation standards to the user's requirements.Elizabeth Wagner, English Translation Division, Commission of the European Communities, Bâtiment Jean Monnet, BP 1907, Plateau du Kirchberg, Luxembourg GD. 
working with systran: Although we use Systran at the Commission, and have been doing so for several years, I think it is necessary to point out that it occupies only a very small place in our overall workload, both at individual level and in the translation services. Some of the translation divisions cannot use Systran at all, because of the restricted number of language pairs it covers. At present, the European Community has ten Member States and seven official languages -Danish, Dutch, English, French, German, Greek and Italian. The number of possible translation directions, or language pairs, is 42 (see Figure 1). The Systran machine translation system now in use at the Commission in Luxembourg works in three of these language pairs -English into French, English into Italian, and French into English -and is used in the French, Italian and English Divisions for part of their work in those language pairs. For all the other divisions, and all other language pairs, human translation is the only option. So there is no question of imposing universal machine translation at the Commission -Systran can only cope with part of our work. In the English Division in Luxembourg, Systran was first introduced for French-to-English translation in 1981, and has been used in various pilot schemes, so we are now well acquainted with the system and its uses and limitations. In 1982 the translation workload of the English Division was composed of the following: translation from French 45 per cent, German 31 per cent, Dutch 9 per cent, Italian 8 per cent, Danish 4 per cent and Greek and other languages 3 per cent (see Figure 2). You will see that translation from French into English accounts for the largest share of our work; at the Commission in Brussels, and in other Community institutions, the percentage of French-to-English is even higher. That is of course the reason why the Commission chose to develop Systran in this language pair. Of the French translated in the English Division in 1982, 11 per cent was translated by Systran with either full or rapid post-editing; this is equivalent to 5 per cent of our total translation workload. From now on I shall be referring specifically to the English Translation Division in Luxembourg, since this is the one where rapid post-editing has been most extensively developed. As yet there has been very little demand for rapid post-editing in the other divisions using Systran (French and Italian). The introduction of Systran to the English Division was very skilfully handled to minimise translators' resistance and allay their fears. Throughout the various experiments conducted, the stress has been on voluntary participation, and different types of assessment forms have been prepared to record translators' reactions and comments. Our colleagues on the Systran Development Team, who are themselves professional translators, have always been extremely helpful and patient in dealing with translators' criticisms and feedback. Appendix:
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
495
0.026263
null
null
null
null
null
null
null
null
54f817386f1aeb30d7e4543a15ab9e1fe7bc1e9b
237295798
null
The tools for the job: an overview
Today, 10 November 1983, is just fifty-two days away from 1984. If you have read George Orwell's famous novel, then you will know that he predicted the invasion of video screens which would monitor everything you do and say, in order to ensure that everyone was loyal to the State. If he walked round offices and homes today he could be forgiven for believing that his prediction, made back in 1949, had already come true. But he wasn't far from the truth, was he? If a video screen is not attached to every product, then a microchip is certain to be incorporated. Even filing systems use microprocessors today, so that at the touch of a button the document you require, one out of thousands, appears in front of you without you having to search for it. Technological development is marvellous, if used for everyone's benefit. But I wonder how many of you will believe that the developments in speech recognition and speech synthesis are beneficial to you. At the Telecoms 83 exhibition held in Geneva two weeks ago, the Japanese company, NEC, showed off its world leadership in speech technology by demonstrating a research model of an automatic interpreting system. A conversation was held in Japanese and English, and another in English and Spanish; both were taking place as if the language barrier just didn't exist. At the moment only around 150 words are utilised, but it is not simply word recognition: it is continuous speech recognition with sentences being composed which are almost grammatically correct. NEC is also researching a speaker-independent system which can recognise words spoken by a variety of people, with the aim of producing an operational automatic interpreting system by the turn of the century.
{ "name": [ "Harriett, Julie" ], "affiliation": [ null ] }
null
null
Proceedings of Translating and the Computer 5: Tools for the trade
1983-11-01
0
0
null
Today, 10 November 1983, is just fifty-two days away from 1984. If you have read George Orwell's famous novel, then you will know that he predicted the invasion of video screens which would monitor everything you do and say, in order to ensure that everyone was loyal to the State. If he walked round offices and homes today he could be forgiven for believing that his prediction, made back in 1949, had already come true. But he wasn't far from the truth, was he? If a video screen is not attached to every product, then a microchip is certain to be incorporated. Even filing systems use microprocessors today, so that at the touch of a button the document you require, one out of thousands, appears in front of you without you having to search for it. Technological development is marvellous, if used for everyone's benefit. But I wonder how many of you will believe that the developments in speech recognition and speech synthesis are beneficial to you. At the Telecoms 83 exhibition held in Geneva two weeks ago, the Japanese company, NEC, showed off its world leadership in speech technology by demonstrating a research model of an automatic interpreting system. A conversation was held in Japanese and English, and another in English and Spanish; both were taking place as if the language barrier just didn't exist. At the moment only around 150 words are utilised, but it is not simply word recognition: it is continuous speech recognition with sentences being composed which are almost grammatically correct. NEC is also researching a speaker-independent system which can recognise words spoken by a variety of people, with the aim of producing an operational automatic interpreting system by the turn of the century. But do not worry too much yet: your jobs are safe for the present. The electronic systems actually available now will simply help to make life in the translator's office a little easier and perhaps more productive. For instance, dictating systems are not normally considered to be the most exciting thing when it comes to technological advance. Nevertheless, even they have not managed to escape the microchip invasion. There are of course many hand-held machines, available at reasonable prices, into which you can record your information for later transcription by a typist; these are ideal for the small office with only two or three staff. For larger offices a centralised system would be more appropriate, such as the Nucleus system recently launched by Dictaphone. It uses a small word processing unit of its own for automatic dictation and transcription, work measurement and analysis, job tracking and word-processing supervisor's information. Staff can dictate into the system from anywhere in the world. The new centralised system from Philips is also advanced, incorporating a dictation management system which constantly monitors up to twenty dictation units for information on incoming dictation, which is then stored on disk for later analysis. It also has a data-interrogation and remote operation capability which provides the supervisor with greater control: transcription work can be distributed to typists evenly, according to their language specialisation; and regardless of how many cassettes are in use, the supervisor will be able to track their route through the system. Naturally, a typist needs the appropriate typewriter for the language being used.
Companies such as Daro Robotron specialise in such keyboards, supplying over 140 from Arabic to Albanian, Bengali to Brazilian, Hebrew to Hungarian. If any language is used constantly, then a typewriter dedicated to that language would obviously be required.Recently, however, Olympia, well known for being first in the electronic typewriter field, developed the Eurotronic in co-operation with the European Community in Brussels. It is a universal machine for embassies, translation agencies and any company conducting business throughout the EEC. The user can now type on his usual keyboard layout and yet communicate effortlessly with all countries in the EEC; for at a touch of a switch, accents and dead keys for eleven European languages are available from one typewriter.Screens can be added to most electronic typewriters today, turning them into economical word processors -an excellent way to enter the field of automation. A translation bureau would find the ETAP word processor a very useful system to install. It, too, has a multilanguage keyboard, with a full range of characters for typing in English, French and German.All specified accents and characters exist on the keyboard, and will therefore be reproduced on the screen. It is not necessary to change character sets when typing in another language, since all characters are available in one set. The daisy-wheel printing element also contains those different characters and accents, so that printing out in different languages no longer involves interruptions. The continual retyping of successive drafts is also unnecessary on the ETAP system. Only amendments need to be retyped, and when information is inserted or deleted, the text which follows is automatically adjusted. You can handle column work, editing within columns and leaving subsequent ones intact. The ETAP has a full-page screen, which is useful not only for complicated layouts but for showing exactly what a page will look like before it is finally printed out. Scientific, partly Greek symbols are also included in the system. Taking the word processing capability a step further is the computer-based 8010 professional information system just launched by Rank Xerox. It lets the user create documents and send electronic mail in nine of the world's languages: English, Russian, Japanese, German, French, Spanish, Chinese, Italian and Portuguese. For the first time it is possible to create documents that freely intermix text in any combination of those languages, using a single device. These significant advances in language processing should appeal to multinational corporations, government agencies and educational institutions in many markets. The basic workstation currently supports German, French, Spanish, Italian, Portuguese and Scandinavian languages in addition to English. The Russian alphabet will be added soon and Japanese will be made available as an add-on software option, with Chinese availability scheduled for early 1984. Arabic, Korean and Hebrew capabilities are currently under development. The system combines computing, text editing, graphics, forms creation, records processing, terminal emulation and so on. It operates on the Ethernet local communications network, and high-resolution laser printers reproduce all text and graphics. Electronic mail can be sent or received over the network in any of the languages, at speeds of up to 500 pages per second. 
To type a foreign language, for example Russian, the user can command the system to display a picture of a Russian keyboard on the screen. As long as it appears, any typing will produce Russian characters. Japanese, considered to be the most difficult of all written languages, uses a mixture of 169 phonetic symbols called Kana plus 6,349 Chinese-style characters called Kanji. For typing Japanese, special software lets the user first type the sound of the word using Kana and then touch a command key which instructs the system to look up the word in its online dictionary of 110,000 words in Kanji spelling... technology at its most interesting.Ordinary computer and word processing printers too are being developed with the international market in view. The Newbury Data range, for example, includes a new dot matrix machine which offers eight resident character sets and has been specially designed to compete favourably in the European market-place.Already available in Europe, but launched only two days ago in the UK by Bytech, was a Wenger Datentechnik printer which incorporates fifteen character sets: the European languages, Japanese, Hebrew, Greek and Russian Cyrillic. Greek maths symbols are also included. An advance in printer technology, this brand new machine can produce a mix of up to six different languages on one line, so you could perhaps produce columns of text in different languages all automatically printed on the same page. Another useful feature of this printer is that you need only transmit a letter once and it will reproduce it as many times as required. (Normally the information would have to be transmitted from the word processor each time.) In Spring 1984 this printer will also be available in colour.One of the problems often encountered with computers or word processors is that the screen is either in landscape form, suitable for conventional data processing tasks, or in portrait form, for word processing applications. Now, however, Facit Data Products has unveiled a radically new concept in terminal technology called the 'Twist'. It has a large dual-display format and separate 'super-slim' keyboard. The monitor integration allows it to be tilted, lifted, and even twisted from landscape to portrait format while in use, so that the screen format can always be suited to the work being performed.Another interesting advance in office technology is the IMP range of personal workstations developed by Office Technology Ltd. These integrate voice, data, text and graphics with electronic filing and electronic mail. An interesting application for the translator would be to check that the typist has typed the text correctly and, should there be, say, a spelling mistake, pick up the integral telephone handset and put a voice message into the system giving the correct spelling. There will be numerous other applications for this, of course, and it could show the way to the combined dictating and word processing systems of tomorrow.Yet another advance, this time a little more pertinent to the translator of the future, is the Logos automatic translator which you will hear about in detail later. I first saw this software wonder at the Hanover Fair in April, where it was being demonstrated on a Wang Office Information System, translating between German and English. Basically, the Logos system allows the translation of over 20,000 words of technical or commercial text in a 24-hour period. 
Timeconsuming routine work is done by the system, leaving questions requiring language competence, judgement and creativity to the human translator. But because of the system's sensitivity to context, the result is claimed to be automated translation superior to any produced in the past.Technological developments are becoming more and more incredible. One such development is the optical disk. Filing documents electronically on optical disks means that you can store any kind of information in its original form, whether it be a drawing or a document. The system that was launched by Toshiba three weeks ago in the UK provides a storage capacity of 10,000 pages per side of the disk, allowing great space savings in offices. Think of it... no more stacks of paper, no more unwieldy filing systems which take hours to search through for one document. With electronic filing you can access any desired document from thousands within 10 seconds. Reading and writing of the picture image is done by laser beam. The only drawback is that, while information can be stored and deleted, the disk cannot as yet be amended. Updatable disks are under development, however, and meanwhile optical disks are an ideal storage method for information which is constant and will not need amending.Naturally you do not simply translate information and store it. Usually the finished document is needed elsewhere, and distributing it in an efficient way is essential. One very efficient modern method is via facsimile machines, such as the latest one from Muirhead, which will transmit a typical A4 document anywhere in the world over normal telephone lines in just 30 seconds. In the last couple of weeks Muirhead announced the introduction of an add-on security device for their Mufax 7800 Group 3 digital fax equipment to provide a high degree of security against possible leakage of confidential information. The device automatically enciphers the facsimile signals before transmission, thus making it virtually impossible for a rogue receiver to decipher these signals without a similarly programmed device at the remote end. It will be most suitable for military, government and other high-security applications, as well as for commercial uses by banks and oil companies where the need for highsecurity protection of transmitted data is vital.The telex network has never been ideal for communicating information for foreign language use, but the new super telex development called Teletex (not to be confused with teletext information services) is ideal. The first live demonstration in the UK of this new service was given by Triumph Adler at the recent International Business Show. It allowed an A4 document to be sent over standard telephone lines in less than 10 seconds. Teletex is not only fast, but cheap -less than the cost of a first-class letter -and, unlike facsimile or telex, produces letter-quality copy. But systems are yet to be installed in any offices in the UK, although there are already a thousand installed in Germany.Even with Teletex on the way, advances have been made in telex preparation and telex terminals. Indeed, it is now possible to send telexes via normal electronic typewriters, saving companies quite a lot of money in terminal costs. Where telex traffic is heavy, however, a dedicated system is bound to be better. A new microcomputer-based teleprinter was launched by Philips last week at the Telecoms 83 exhibition in Geneva. 
The Pact 250, as it is called, is a desk-top machine whose standard features include a 50,000-character electronic memory, powerful but simple message-end functions and a high-resolution bi-directional printer suitable for most scripts, including Cyrillic, Greek, Arabic, Farsi and Thai as well as all the Roman-based alphabets. Additional options include a visual display unit for on-screen editing of messages. Later versions will be produced for Teletex use.In large translation offices information often has to be passed between people until the work is finalised. The way to allow this communication to be undertaken easily is for the electronic equipment to be linked together so that one word processor, for instance, can 'talk' to another. This is done by installing what is called a local area network. It also allows word processors to share expensive resources (such as a printer). Often an office finds the need to link up various pieces of kit, only to find eventually that the wiring is uncontrollable and consequently not all the systems are linked effectively. If you install a suitable network, however, you will gain a large measure of versatility which will allow the layout to keep pace with expanding office requirements.One relatively low-cost system is the Clearway from Real Time Developments. When, for instance, an expensive new multilingual printer is installed, it can easily be hooked on to the network and made available to staff working at any of the keyboards linked into it. With a network, users can share all information stored in the system, passing it around, filing it, amending it, with recourse to nothing more elaborate than their own workstation.I could go on for hours discussing the new electronic equipment available for use in the translator's office. Time precludes that, but nevertheless I hope I have set the scene for today's conference. This overview at least will indicate that there is plenty of equipment available today to make your work more efficient and perhaps more productive. It may be frightening to contemplate the future with its automatic translation systems, but if these are viewed as an aid rather than as something to be feared, the next few years will be very interesting. The world is becoming more conscious of the need for good communications, and the systems being developed will make achieving that goal a little easier and much faster.
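The kana-to-kanji input method mentioned earlier in this overview (typing the sound of a Japanese word in Kana and letting the system substitute the Kanji spelling from an online dictionary) is at heart a dictionary lookup. The toy sketch below, with a handful of invented entries, shows that core idea only; the real Xerox dictionary held about 110,000 entries and resolved ambiguous readings interactively.

```python
# Toy kana-to-kanji lookup; the three entries are ordinary dictionary words
# chosen for illustration and are not taken from the system described above.
KANA_TO_KANJI = {
    "にほん": "日本",    # nihon    - Japan
    "ほんやく": "翻訳",  # hon'yaku - translation
    "じしょ": "辞書",    # jisho    - dictionary
}

def to_kanji(kana: str) -> str:
    """Return the kanji spelling if the reading is known, else the kana as typed."""
    return KANA_TO_KANJI.get(kana, kana)

for word in ("にほん", "ほんやく", "ことば"):
    print(word, "->", to_kanji(word))
```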
null
null
null
null
Main paper: : Today, 10 November 1983, is just fifty-two days away from 1984. If you have read George Orwell's famous novel, then you will know that he predicted the invasion of video screens which would monitor everything you do and say, in order to ensure that everyone was loyal to the State. If he walked round offices and homes today he could be forgiven for believing that his prediction, made back in 1949, had already come true. But he wasn't far from the truth, was he? If a video screen is not attached to every product, then a microchip is certain to be incorporated. Even filing systems use microprocessors today, so that at the touch of a button the document you require, one out of thousands, appears in front of you without you having to search for it. Technological development is marvellous, if used for everyone's benefit. But I wonder how many of you will believe that the developments in speech recognition and speech synthesis are beneficial to you. At the Telecoms 83 exhibition held in Geneva two weeks ago, the Japanese company, NEC, showed off its world leadership in speech technology by demonstrating a research model of an automatic interpreting system. A conversation was held in Japanese and English, and another in English and Spanish; both were taking place as if the language barrier just didn't exist. At the moment only around 150 words are utilised, but it is not simply word recognition: it is continuous speech recognition with sentences being composed which are almost grammatically correct. NEC is also researching a speaker-independent system which can recognise words spoken by a variety of people, with the aim of producing an operational automatic interpreting system by the turn of the century. But do not worry too much yet: your jobs are safe for the present. The electronic systems actually available now will simply help to make life in the translator's office a little easier and perhaps more productive. For instance, dictating systems are not normally considered to be the most exciting thing when it comes to technological advance. Nevertheless, even they have not managed to escape the microchip invasion. There are of course many hand-held machines, available at reasonable prices, into which you can record your information for later transcription by a typist; these are ideal for the small office with only two or three staff. For larger offices a centralised system would be more appropriate, such as the Nucleus system recently launched by Dictaphone. It uses a small word processing unit of its own for automatic dictation and transcription, work measurement and analysis, job tracking and word-processing supervisor's information. Staff can dictate into the system from anywhere in the world. The new centralised system from Philips is also advanced, incorporating a dictation management system which constantly monitors up to twenty dictation units for information on incoming dictation, which is then stored on disk for later analysis. It also has a data-interrogation and remote operation capability which provides the supervisor with greater control: transcription work can be distributed to typists evenly, according to their language specialisation; and regardless of how many cassettes are in use, the supervisor will be able to track their route through the system. Naturally, a typist needs the appropriate typewriter for the language being used.
Companies such as Daro Robotron specialise in such keyboards, supplying over 140 from Arabic to Albanian, Bengali to Brazilian, Hebrew to Hungarian. If any language is used constantly, then a typewriter dedicated to that language would obviously be required.Recently, however, Olympia, well known for being first in the electronic typewriter field, developed the Eurotronic in co-operation with the European Community in Brussels. It is a universal machine for embassies, translation agencies and any company conducting business throughout the EEC. The user can now type on his usual keyboard layout and yet communicate effortlessly with all countries in the EEC; for at a touch of a switch, accents and dead keys for eleven European languages are available from one typewriter.Screens can be added to most electronic typewriters today, turning them into economical word processors -an excellent way to enter the field of automation. A translation bureau would find the ETAP word processor a very useful system to install. It, too, has a multilanguage keyboard, with a full range of characters for typing in English, French and German.All specified accents and characters exist on the keyboard, and will therefore be reproduced on the screen. It is not necessary to change character sets when typing in another language, since all characters are available in one set. The daisy-wheel printing element also contains those different characters and accents, so that printing out in different languages no longer involves interruptions. The continual retyping of successive drafts is also unnecessary on the ETAP system. Only amendments need to be retyped, and when information is inserted or deleted, the text which follows is automatically adjusted. You can handle column work, editing within columns and leaving subsequent ones intact. The ETAP has a full-page screen, which is useful not only for complicated layouts but for showing exactly what a page will look like before it is finally printed out. Scientific, partly Greek symbols are also included in the system. Taking the word processing capability a step further is the computer-based 8010 professional information system just launched by Rank Xerox. It lets the user create documents and send electronic mail in nine of the world's languages: English, Russian, Japanese, German, French, Spanish, Chinese, Italian and Portuguese. For the first time it is possible to create documents that freely intermix text in any combination of those languages, using a single device. These significant advances in language processing should appeal to multinational corporations, government agencies and educational institutions in many markets. The basic workstation currently supports German, French, Spanish, Italian, Portuguese and Scandinavian languages in addition to English. The Russian alphabet will be added soon and Japanese will be made available as an add-on software option, with Chinese availability scheduled for early 1984. Arabic, Korean and Hebrew capabilities are currently under development. The system combines computing, text editing, graphics, forms creation, records processing, terminal emulation and so on. It operates on the Ethernet local communications network, and high-resolution laser printers reproduce all text and graphics. Electronic mail can be sent or received over the network in any of the languages, at speeds of up to 500 pages per second. 
To type a foreign language, for example Russian, the user can command the system to display a picture of a Russian keyboard on the screen. As long as it appears, any typing will produce Russian characters. Japanese, considered to be the most difficult of all written languages, uses a mixture of 169 phonetic symbols called Kana plus 6,349 Chinese-style characters called Kanji. For typing Japanese, special software lets the user first type the sound of the word using Kana and then touch a command key which instructs the system to look up the word in its online dictionary of 110,000 words in Kanji spelling... technology at its most interesting.Ordinary computer and word processing printers too are being developed with the international market in view. The Newbury Data range, for example, includes a new dot matrix machine which offers eight resident character sets and has been specially designed to compete favourably in the European market-place.Already available in Europe, but launched only two days ago in the UK by Bytech, was a Wenger Datentechnik printer which incorporates fifteen character sets: the European languages, Japanese, Hebrew, Greek and Russian Cyrillic. Greek maths symbols are also included. An advance in printer technology, this brand new machine can produce a mix of up to six different languages on one line, so you could perhaps produce columns of text in different languages all automatically printed on the same page. Another useful feature of this printer is that you need only transmit a letter once and it will reproduce it as many times as required. (Normally the information would have to be transmitted from the word processor each time.) In Spring 1984 this printer will also be available in colour.One of the problems often encountered with computers or word processors is that the screen is either in landscape form, suitable for conventional data processing tasks, or in portrait form, for word processing applications. Now, however, Facit Data Products has unveiled a radically new concept in terminal technology called the 'Twist'. It has a large dual-display format and separate 'super-slim' keyboard. The monitor integration allows it to be tilted, lifted, and even twisted from landscape to portrait format while in use, so that the screen format can always be suited to the work being performed.Another interesting advance in office technology is the IMP range of personal workstations developed by Office Technology Ltd. These integrate voice, data, text and graphics with electronic filing and electronic mail. An interesting application for the translator would be to check that the typist has typed the text correctly and, should there be, say, a spelling mistake, pick up the integral telephone handset and put a voice message into the system giving the correct spelling. There will be numerous other applications for this, of course, and it could show the way to the combined dictating and word processing systems of tomorrow.Yet another advance, this time a little more pertinent to the translator of the future, is the Logos automatic translator which you will hear about in detail later. I first saw this software wonder at the Hanover Fair in April, where it was being demonstrated on a Wang Office Information System, translating between German and English. Basically, the Logos system allows the translation of over 20,000 words of technical or commercial text in a 24-hour period. 
Timeconsuming routine work is done by the system, leaving questions requiring language competence, judgement and creativity to the human translator. But because of the system's sensitivity to context, the result is claimed to be automated translation superior to any produced in the past.Technological developments are becoming more and more incredible. One such development is the optical disk. Filing documents electronically on optical disks means that you can store any kind of information in its original form, whether it be a drawing or a document. The system that was launched by Toshiba three weeks ago in the UK provides a storage capacity of 10,000 pages per side of the disk, allowing great space savings in offices. Think of it... no more stacks of paper, no more unwieldy filing systems which take hours to search through for one document. With electronic filing you can access any desired document from thousands within 10 seconds. Reading and writing of the picture image is done by laser beam. The only drawback is that, while information can be stored and deleted, the disk cannot as yet be amended. Updatable disks are under development, however, and meanwhile optical disks are an ideal storage method for information which is constant and will not need amending.Naturally you do not simply translate information and store it. Usually the finished document is needed elsewhere, and distributing it in an efficient way is essential. One very efficient modern method is via facsimile machines, such as the latest one from Muirhead, which will transmit a typical A4 document anywhere in the world over normal telephone lines in just 30 seconds. In the last couple of weeks Muirhead announced the introduction of an add-on security device for their Mufax 7800 Group 3 digital fax equipment to provide a high degree of security against possible leakage of confidential information. The device automatically enciphers the facsimile signals before transmission, thus making it virtually impossible for a rogue receiver to decipher these signals without a similarly programmed device at the remote end. It will be most suitable for military, government and other high-security applications, as well as for commercial uses by banks and oil companies where the need for highsecurity protection of transmitted data is vital.The telex network has never been ideal for communicating information for foreign language use, but the new super telex development called Teletex (not to be confused with teletext information services) is ideal. The first live demonstration in the UK of this new service was given by Triumph Adler at the recent International Business Show. It allowed an A4 document to be sent over standard telephone lines in less than 10 seconds. Teletex is not only fast, but cheap -less than the cost of a first-class letter -and, unlike facsimile or telex, produces letter-quality copy. But systems are yet to be installed in any offices in the UK, although there are already a thousand installed in Germany.Even with Teletex on the way, advances have been made in telex preparation and telex terminals. Indeed, it is now possible to send telexes via normal electronic typewriters, saving companies quite a lot of money in terminal costs. Where telex traffic is heavy, however, a dedicated system is bound to be better. A new microcomputer-based teleprinter was launched by Philips last week at the Telecoms 83 exhibition in Geneva. 
The Pact 250, as it is called, is a desk-top machine whose standard features include a 50,000-character electronic memory, powerful but simple message-end functions and a high-resolution bi-directional printer suitable for most scripts, including Cyrillic, Greek, Arabic, Farsi and Thai as well as all the Roman-based alphabets. Additional options include a visual display unit for on-screen editing of messages. Later versions will be produced for Teletex use.In large translation offices information often has to be passed between people until the work is finalised. The way to allow this communication to be undertaken easily is for the electronic equipment to be linked together so that one word processor, for instance, can 'talk' to another. This is done by installing what is called a local area network. It also allows word processors to share expensive resources (such as a printer). Often an office finds the need to link up various pieces of kit, only to find eventually that the wiring is uncontrollable and consequently not all the systems are linked effectively. If you install a suitable network, however, you will gain a large measure of versatility which will allow the layout to keep pace with expanding office requirements.One relatively low-cost system is the Clearway from Real Time Developments. When, for instance, an expensive new multilingual printer is installed, it can easily be hooked on to the network and made available to staff working at any of the keyboards linked into it. With a network, users can share all information stored in the system, passing it around, filing it, amending it, with recourse to nothing more elaborate than their own workstation.I could go on for hours discussing the new electronic equipment available for use in the translator's office. Time precludes that, but nevertheless I hope I have set the scene for today's conference. This overview at least will indicate that there is plenty of equipment available today to make your work more efficient and perhaps more productive. It may be frightening to contemplate the future with its automatic translation systems, but if these are viewed as an aid rather than as something to be feared, the next few years will be very interesting. The world is becoming more conscious of the need for good communications, and the systems being developed will make achieving that goal a little easier and much faster. Appendix:
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
495
0
null
null
null
null
null
null
null
null
16ca6d1612ea65476a1ca27930ee8a1da8f18280
237295817
null
New developments in TITUS 4
The TITUS 4 system was originally designed to produce abstracts in the form of sentences or phrases written in controlled syntax. It is now being improved, partly to give the user more flexibility in writing sentences, and partly so that the system can be implemented in other fields than abstracting services. Improvements being introduced to enhance TITUS 4's versatility include multiple-clause sentences. Certain restrictions, however, remain owing to linguistic problems associated with translation from one language to another.
{ "name": [ "Streiff, A. A." ], "affiliation": [ null ] }
null
null
Proceedings of Translating and the Computer 5: Tools for the trade
1983-11-01
0
5
null
The TITUS 4 system was designed originally to receive abstracts in the form of sentences or phrases specially encoded for input to a computer in any one of four languages (English, French, German and Spanish). The computer stores this input in suitable form and, on command, outputs the abstracts as sentences or phrases in one or more of the four languages specified in the command. The TITUS 4 system enables abstracts to be written in sentences or phrases which closely resemble free-language sentences or phrases. Certain limitations or restrictions are, however, imposed by the computer program and by linguistic problems associated with translation from one language to another. The controlled syntax language used by the TITUS 4 system comprises two basic elements (Figure 1): 1. A sub-set of the whole vocabulary of a language containing all terms specific to a given field and a part of the basic vocabulary which is common to all fields. 2. A sub-set of all syntactic rules of a language. Though few in number, the syntactic rules of the TITUS 4 system allow the user to formulate a large variety of ideas. GENERAL FLOW DIAGRAM OF TITUS 4 (Figure 2): Document records to be introduced in the computer-held database must be encoded according to the specific TITUS 4 recording rules. All sentences of a record should match the syntactic rules recognised by the system and only certain terms picked out of the TITUS 4 pre-established vocabulary. Records, whether written in English, French, German or Spanish, are interactively introduced into the computer by means of screen-type terminals. Correct use of recording rules is controlled, and lexical and syntactic validity is checked by 'generative grammars'. Any error or ambiguity arising during the input phase is detected by the program and causes the display of an error message or a question (polysemy, homography) requiring a statement from the user. Each sentence is transformed into 'swivel language' before its storage. It is a binary form language whose particular structure permits very quick translation of a sentence in one or more of the four languages. After a sentence has been validated, 'generative grammars' index it automatically by detecting the descriptors contained in it. During the output phase, any document record stored in 'swivel language' is processed by 'output transformational grammars' which effect automatic translation. Sentences and phrases must consist only of authorised words. The authorised lexical elements are either 'fixed elements' or 'variable elements'.
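As a toy illustration of the input-validate-store-generate cycle just described (not the actual TITUS 4 grammars or its binary 'swivel language'), the Python sketch below checks a subject-verb-object clause against a tiny invented controlled vocabulary, stores it in a language-neutral form, and regenerates it in the four languages. Agreement and case, which the real output grammars handle, are ignored here.

```python
# Invented mini-lexicon: concept id -> surface forms per language.
LEXICON = {
    "fabric":  {"en": "the fabric", "fr": "le tissu",    "de": "das Gewebe",   "es": "el tejido"},
    "dye":     {"en": "the dye",    "fr": "le colorant", "de": "der Farbstoff", "es": "el colorante"},
    "absorbs": {"en": "absorbs",    "fr": "absorbe",     "de": "absorbiert",    "es": "absorbe"},
}

def encode(subject: str, verb: str, obj: str) -> tuple:
    """Validate a subject-verb-object clause against the controlled vocabulary."""
    for concept in (subject, verb, obj):
        if concept not in LEXICON:
            raise ValueError(f"'{concept}' is not in the controlled vocabulary")
    return (subject, verb, obj)  # the stored, language-neutral record

def generate(record: tuple, lang: str) -> str:
    """Regenerate the stored record as a sentence in the requested language
    (case and agreement are deliberately ignored in this sketch)."""
    words = [LEXICON[concept][lang] for concept in record]
    sentence = " ".join(words)
    return sentence[0].upper() + sentence[1:] + "."

record = encode("fabric", "absorbs", "dye")
for lang in ("en", "fr", "de", "es"):
    print(generate(record, lang))
```

Running the sketch prints the same stored record as an English, French, German and Spanish sentence, which is the essence of the single-input, multiple-output behaviour described above.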
null
null
The variable elements listed alphabetically in the lexicon comprise nouns, adjectives, verbs and adverbs. Nouns may be single-word concepts or pre-coordinated multiple-word concepts. No lexical element can exceed 48 characters in length, including spaces. Adjectives may be of three types: (i) simple adjectives, which may be single-word or multiple-word adjectives; (ii) adjectives with complementation; (iii) past-participles formed from verbs in the lexicon, used adjectivally. The verbs in the lexicon are listed in their infinitive form. Most of the tenses and verbal elements (participles) are derived from the infinitive. Common adverbs of frequency and manner are included in the lexicon. The main development in TITUS 4 since its implementation in 1980 is the introduction of subclauses (subordinate clauses, relative clauses and 'that' clauses) along with pronouns, giving the system more flexibility in writing sentences. The basic pattern of a sentence or a clause (Figure 3), as established for TITUS 4, remains unchanged, and the same writing rule will apply to any clause of the whole sentence. A sentence may contain up to four clauses, the first being always the main clause. Subordinate clauses are introduced by subordinating conjunctions (although, if, when, since, because...). Only those clauses which are the object of a verb in another clause are taken into account by the system (for example: The author states that this method may be improved).
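A minimal sketch (not TITUS 4 code) of the input-side checks implied by the rules above: a variable element must belong to one of the four open categories, and no lexical element may exceed 48 characters including spaces. The function name and error messages are invented; only the category list and the length limit come from the text.

```python
ALLOWED_CATEGORIES = {"noun", "adjective", "verb", "adverb"}
MAX_LENGTH = 48  # characters, spaces included

def check_variable_element(term: str, category: str) -> None:
    """Raise if a proposed lexicon entry breaks the stated constraints."""
    if category not in ALLOWED_CATEGORIES:
        raise ValueError(f"unknown category: {category!r}")
    if len(term) > MAX_LENGTH:
        raise ValueError(f"{term!r} exceeds {MAX_LENGTH} characters")

check_variable_element("continuous dyeing range", "noun")  # pre-coordinated multiple-word concept
check_variable_element("absorb", "verb")                   # verbs are stored in the infinitive
```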
The fixed elements included in the program comprise: determiners (articles, demonstratives, possessives, quantifiers, cardinals...), conjunctions, prepositions, adverbs of degree, negatives, and auxiliary verbs and modals. Relative clauses are introduced by the relative pronouns (who, which and that). These clauses may or may not be included in another clause (main clause or other subclause). To avoid any confusion between some subordinating conjunctions (before, after, since, etc.) and prepositions, diacritical marks are used to indicate to the program the beginning of a subordinate clause. Relative pronouns and personal pronouns as well are considered as fixed elements and included in the programs. Because the system is interactive, antecedents of pronouns (either relative or personal) can be determined in the same way as is done for prepositions, by the use of diacritical marks. A question/answer procedure is being set up to determine the antecedent of a pronoun whose antecedent is in a preceding clause. Another innovation in the near future will be the use of the imperative form of verbs, this form being generally used in command languages of users' manuals and similar documents. Any user of the TITUS 4 system must comply with the writing rules imposed by the controlled syntax, and the training time to become familiar with it has been estimated at five to six days (full-time). It will be easily understood that, since the TITUS 4 programs are interactive (question/answer procedure), the input of sentences or texts should be done by people who have themselves written them according to the controlled syntax and the corresponding vocabulary. One of the advantages frequently cited is the absence of cumbersome phrases or verbosity, thus much improving the clarity of the texts. Any document or text entered can be interactively corrected after having been checked by the user himself from the four-language control listing edited after input has been completed. We do not yet possess sufficiently sophisticated computers endowed with artificial intelligence (if indeed we ever shall) capable of interpreting all the intellectual subtleties and shades implicit in written expression in different languages. For this reason TITUS 4 was based on a veritable 'controlled-syntax language' which can be processed by computer, but which is a simplified and formalised synthesis of natural language. We hope TITUS 4 will not be restricted to abstracting and information services publishing multilingual bibliographic periodicals. The TITUS 4 system may be a useful and reliable tool for export-market-oriented industry, one of whose primary needs is to publish multilingual technical brochures, leaflets and notices and whose translation costs are nowadays of paramount importance. Another application of TITUS 4 could be its implementation in more sophisticated systems which automatically scan free language texts and pick out all the sentences matching the models defined by the controlled syntax.
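To make the diacritical-mark convention mentioned above concrete, the following sketch shows how an explicit mark placed by the writer lets a program find the start of a subordinate clause without having to decide whether a word like 'after' is a preposition or a conjunction. The marker character and the splitting logic are invented for illustration and do not reproduce TITUS 4's actual notation.

```python
# Hypothetical clause marker inserted by the writer before a subordinating
# conjunction; everything before the first marker is treated as the main clause.
CLAUSE_MARKER = "§"

def split_clauses(encoded_sentence: str) -> list[str]:
    """Split an encoded sentence into the main clause and marked subordinate clauses."""
    parts = [part.strip() for part in encoded_sentence.split(CLAUSE_MARKER)]
    return [part for part in parts if part]

sentence = "the fabric is rinsed §after the dye is fixed"
for i, clause in enumerate(split_clauses(sentence)):
    kind = "main clause" if i == 0 else "subordinate clause"
    print(f"{kind}: {clause}")
```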
Another innovation in the near future will be the use of the imperative form of verbs, this form being generally used in the command language of users' manuals and similar documents. Any user of the TITUS 4 system must comply with the writing rules imposed by the controlled syntax, and the training time needed to become familiar with it has been estimated at five to six days (full-time). Since the TITUS 4 programs are interactive (question/answer procedure), the input of sentences or texts should be done by people who have themselves written them according to the controlled syntax and the corresponding vocabulary. One of the advantages frequently cited is the absence of cumbersome phrases or verbosity, which much improves the clarity of the texts. Any document or text entered can be interactively corrected after having been checked by the user himself against the four-language control listing produced after input has been completed. We do not yet possess sufficiently sophisticated computers endowed with artificial intelligence (if indeed we ever shall) capable of interpreting all the intellectual subtleties and shades implicit in written expression in different languages. For this reason TITUS 4 was based on a veritable 'controlled-syntax language' which can be processed by computer, but which is a simplified and formalised synthesis of natural language. We hope TITUS 4 will not be restricted to abstracting and information services publishing multilingual bibliographic periodicals. The TITUS 4 system may be a useful and reliable tool for export-market-oriented industry, one of whose primary needs is to publish multilingual technical brochures, leaflets and notices, and whose translation costs are nowadays of paramount importance. Another application of TITUS 4 could be its implementation in more sophisticated systems which automatically scan free-language texts and pick out all the sentences matching the models defined by the controlled syntax. introduction: The TITUS 4 system was designed originally to receive abstracts in the form of sentences or phrases specially encoded for input to a computer in any one of four languages (English, French, German and Spanish). The computer stores this input in suitable form and, on command, outputs the abstracts as sentences or phrases in one or more of the four languages specified in the command. The TITUS 4 system enables abstracts to be written in sentences or phrases which closely resemble free-language sentences or phrases. Certain limitations or restrictions are, however, imposed by the computer program and by linguistic problems associated with translation from one language to another. The controlled-syntax language used by the TITUS 4 system comprises two basic elements (Figure 1): 1. A sub-set of the whole vocabulary of a language, containing all terms specific to a given field and a part of the basic vocabulary which is common to all fields. 2. A sub-set of all syntactic rules of the language. Though few in number, the syntactic rules of the TITUS 4 system allow the user to formulate a large variety of ideas. GENERAL FLOW DIAGRAM OF TITUS 4 (Figure 2): Document records to be introduced into the computer-held database must be encoded according to the specific TITUS 4 recording rules.
All sentences of a record must match the syntactic rules recognised by the system and use only terms picked out of the pre-established TITUS 4 vocabulary. Records, whether written in English, French, German or Spanish, are interactively introduced into the computer by means of screen-type terminals. Correct use of the recording rules is controlled, and lexical and syntactic validity is checked by 'generative grammars'. Any error or ambiguity arising during the input phase is detected by the program and causes the display of an error message or a question (polysemy, homography) requiring a statement from the user. Each sentence is transformed into 'swivel language' before storage; this is a binary-form language whose particular structure permits very quick translation of a sentence into one or more of the four languages. After a sentence has been validated, 'generative grammars' index it automatically by detecting the descriptors contained in it. During the output phase, any document record stored in 'swivel language' is processed by 'output transformational grammars' which effect the automatic translation. Sentences and phrases must consist only of authorised words; the authorised lexical elements are either 'fixed elements' or 'variable elements'. Appendix:
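The input/validation/output flow just described can be summarised with a small pipeline sketch. This is a hedged illustration only: the stage names and the 'swivel' encoding used here are placeholders, since the flow (Figure 2) is described but its internal data formats are not.

```python
# Hedged sketch of the TITUS 4 flow described above: encode -> validate ->
# store in an interlingual 'swivel' form -> translate on output.  All data
# structures here are invented placeholders, not the real binary swivel
# language, whose format is not disclosed in the text.

LANGUAGES = ("English", "French", "German", "Spanish")

def validate(sentence: str, lexicon: set[str]) -> list[str]:
    """Reject any word that is not in the authorised vocabulary."""
    words = sentence.lower().split()
    unknown = [w for w in words if w not in lexicon]
    if unknown:
        raise ValueError(f"unauthorised terms: {unknown}")  # would trigger a question to the user
    return words

def to_swivel(words: list[str]) -> tuple:
    """Placeholder 'swivel language': here just a tuple of lexicon keys."""
    return tuple(words)

def translate(swivel: tuple, target: str, dictionaries: dict) -> str:
    """Output phase: map each swivel element into the target language."""
    assert target in LANGUAGES
    return " ".join(dictionaries[target][w] for w in swivel)

lexicon = {"fibre", "is", "dyed"}
dictionaries = {"French": {"fibre": "la fibre", "is": "est", "dyed": "teinte"}}
swivel = to_swivel(validate("fibre is dyed", lexicon))
print(translate(swivel, "French", dictionaries))
```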
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
495
0.010101
null
null
null
null
null
null
null
null
352bb63ae5832b146adabce16ff3ec5aa05b5953
237295779
null
Development of computers and communications: the role of the National Physical Laboratory
The paper traces the computer research programme at NPL, from the early development of computers, through various applications (including natural language processing), to computer networking. It then surveys some of the computer network services now generally available.
{ "name": [ "Price, Wyn L." ], "affiliation": [ null ] }
null
null
Proceedings of Translating and the Computer 5: Tools for the trade
1983-11-01
7
0
null
The National Physical Laboratory (NPL) is a research establishment of the Department of Trade and Industry. The Laboratory was founded in 1900, due largely to the influence of the British Association for the Advancement of Science and the Royal Society. The original remit of the Laboratory laid strong emphasis on the creation and maintenance of measurement standards, calibration and related physical research; activity in this area remains a major part of the work of NPL today, but the Laboratory has grown and its research has diversified well beyond its first objectives. A detailed history of the Laboratory by Pyatt (1) has been published recently. Since 1945 the Laboratory has played a leading part in the development of computers and communication involving computers. Work began in 1946 on an 'automatic computing engine', adopting a name originally used by Charles Babbage, 'father' of computing, which led to the building of the Pilot ACE computer. This began working in May 1950 and was one of the world's first computers; at that time it was also one of the fastest. Pilot ACE gave sterling service to the Mathematics Division at NPL for a number of years; its work included studies of road traffic control and aircraft design. When it was dismantled at the end of its service it was transferred for display at the Science Museum. Pilot ACE was succeeded by DEUCE, an engineered version of the same basic design by the English Electric Company. In turn this was itself replaced by the ACE computer, designed and built at NPL, a much more powerful machine which came into service in 1958 and was heavily used until it was closed down in 1967. The 'full-scale' ACE was the last of the series of machines designed and built at NPL. Today the Laboratory has a large number of computers, ranging from microprocessors to big 'number crunchers', which play a vital part in the general research work; these come from a wide range of manufacturers. The teams that built the early NPL machines, together with new staff joining over the years, turned their attention to applications of computer systems; these applications were generally 'non-mathematical', such as language translation and pattern recognition. At the same time the mathematical work expanded in the Mathematics Division. The machine translation research programme began in 1959 and aimed at translating Russian scientific texts (particularly in the fields of electrical engineering, mathematics and physics) into English, using the 'full-scale' ACE computer. The standard of translation aimed at was one that would be useful to a professional worker in the particular field; translations of literary quality were not sought. A dictionary of about 20,000 entries was created, first punched on cards and then written on magnetic tape. This dictionary was organised by Russian 'stem' forms, with descriptions of the inflections possible with each stem. Allowance was made for the systematic stem changes which occur in Russian. Each entry contained at least one English equivalent, with additional equivalents where alternatives were possible. The translation process commenced with a scan of the Russian text designed to split stems from suffixes. This enabled the words to be looked up in the tape dictionary. Also at this stage Russian word groups forming idiomatic phrases were identified and the corresponding dictionary entries found.
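As a rough illustration of the stem-plus-inflection dictionary lookup described above, here is a small hedged Python sketch. The stems, suffixes and glosses are invented examples; the real NPL dictionary held about 20,000 entries on magnetic tape and handled systematic stem changes, which this toy does not attempt.

```python
# Hedged sketch of stem/suffix splitting and dictionary lookup in the spirit
# of the NPL Russian-English system described above.  The entries below are
# illustrative inventions, not taken from the actual 20,000-entry dictionary.

STEM_DICTIONARY = {
    # stem (transliterated) -> (allowed suffixes, English equivalents)
    "elektrichesk": ({"ij", "aya", "oe", "ie"}, ["electrical", "electric"]),
    "napryazheni": ({"e", "ya", "yu", "em"}, ["voltage", "stress"]),
}

def lookup(word: str) -> list[str]:
    """Split a word into stem + suffix and return its English equivalents."""
    for stem, (suffixes, equivalents) in STEM_DICTIONARY.items():
        if word.startswith(stem) and word[len(stem):] in suffixes:
            return equivalents
    # No match: fall back to transliteration, as the NPL system did.
    return [word]

print(lookup("elektricheskoe"))   # -> ['electrical', 'electric']
print(lookup("kompyuter"))        # -> ['kompyuter']  (transliteration fallback)
```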
Syntactic analysis programs were written which allowed much of the structure of the Russian texts to be discovered. A complete syntactic analysis was not aimed for. Other programs took the partially analysed text and created the corresponding English structures. Into these structures the English equivalent forms were fitted. The result was punched on cards for output from the computer. Finally the cards were tabulated on a card-operated typewriter. Where more than one English equivalent was possible, these were output as alternatives to be selected by the reader. Where no dictionary match was found, the Russian word was transliterated into 'English'; because of the large-scale 'borrowing' of English words to make Russian technical terms, this transliteration often gave a perfectly readable 'word'. At the end of the machine translation research project, independent assessment of the product quality was sought. Scientists were invited to send in Russian texts for processing by the NPL translation system. The results were then sent back to the originators for evaluation against a nine-point scale which ranged from 'useless' to 'fully adequate'. Altogether thirty-four texts were subjected to this process and the average response corresponded to the description 'mostly very good - a few sentences obscure, so that something may be lost, but normally clear enough'. A description of the NPL machine translation project may be found in the paper by McDaniel et al. (2). The machine translation project was followed by another project in the field of natural language processing. This was intended to take machine shorthand and convert this into readable English. The project was based on the Palantype system of mechanical shorthand, which employs a machine with a 29-key keyboard; the keys are arranged in groups of left consonants, vowels and right consonants, and the recording system follows fairly closely the syllabic structure of words, though some shorthand 'chords' cover more than one syllable. The normal output of the machine is a strip of printed paper from which a transcription is made by a human operator, a time-consuming operation which the NPL project aimed to obviate. The Palantype machine was adapted at NPL to permit data input to the KDF9 computer. The next task was to create a Palantype-to-English dictionary. A suitable word list was obtained and this was Palantyped and recorded on punched paper tape. From this the machine dictionary was built, taking into account the syllabic structure of the Palantype system. As with the Russian translation system, it was sometimes necessary to include more than one English equivalent; these arose because of homophones. In action the transcription system accepted a stream of Palantype chords and looked these up in the dictionary to find the English equivalents. Because word boundaries were rarely explicit in the incoming stream, the process of dictionary lookup had the additional task of trying to identify word boundaries; this was done on the principle of 'longest-match', which usually (but not always) gave the right result. The English output from the transcription system was printed on a fast character printer. Where dictionary lookup had failed to find a match, the process took the Palantype chords and transliterated these, often giving a readable equivalent, just as was done with the Russian translation.
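The 'longest-match' boundary detection mentioned above lends itself to a small illustration. The chord strings and the dictionary below are invented stand-ins; real Palantype chords encode syllables on a 29-key layout and are not reproduced here.

```python
# Hedged sketch of greedy 'longest-match' segmentation of a chord stream,
# as described for the Palantype transcription system above.  The "chords"
# here are plain syllable strings purely for illustration.

DICTIONARY = {
    ("kom", "pu", "ter"): "computer",
    ("kom",): "come",
    ("pu", "ter"): "pewter",
    ("net", "work"): "network",
}
MAX_CHORDS_PER_WORD = max(len(k) for k in DICTIONARY)

def transcribe(chords: list[str]) -> list[str]:
    """Greedily take the longest dictionary match at each position."""
    out, i = [], 0
    while i < len(chords):
        for span in range(min(MAX_CHORDS_PER_WORD, len(chords) - i), 0, -1):
            key = tuple(chords[i:i + span])
            if key in DICTIONARY:
                out.append(DICTIONARY[key])
                i += span
                break
        else:
            out.append(chords[i])  # no match: emit the chord itself (transliteration)
            i += 1
    return out

print(transcribe(["kom", "pu", "ter", "net", "work"]))
# -> ['computer', 'network']   (greedy longest match; can occasionally mis-segment)
```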
Given an error-free input, the transcription system performed very well, but fast operation of the keyboard at normal recording speeds resulted in a large number of input errors with which the system could not cope adequately. Before the project ended, techniques were being developed which introduced a degree of error correction into the input text, leading to better-quality output. An account of the NPL Palantype transcription project may be found in Price (3). Though the NPL project ceased in 1970 after four years' work, it is pleasant to be able to report that others have since built on this work (4, 5, 6), leading to useful applications in subtitling television programmes and in aids for the deaf. These later developments have made use of microprocessor systems, which were not available at the time of the NPL project. Both the Russian translation and the Palantype transcription systems depended on very large computers, and this diminished their practical application. The Russian translation system was essentially a batch process, whilst the Palantype system could operate either online or in batch mode. Batch processing was the norm for the vast majority of early computer applications, with jobs being submitted in the form of packs of program and data cards (or tape) to machine operators. Computers usually handled only one job at a time. As computer technology developed, so we saw the introduction of time-sharing, in which more than one task at a time was handled by a machine, each task getting its turn under the control of the computer operating system; thus the main processor of the machine might be occupied with one job whilst a backing-store data transfer was going on for another job. With the possibility of running multiple tasks, the need for presenting jobs in batch form became less insistent, and computer users were able to prepare programs and submit jobs from terminals without directly involving the computer operators; results were often delivered to the user terminals. This development, with users becoming increasingly remote from the computers, led to exciting innovations in communication technology. The level of innovation increased enormously when communication systems were required to allow computers to communicate with each other and not just with remote terminals. The era of distributed processing has meant that some of the more complex computational tasks are now performed using computing capability located at more than one site, with co-operating computers and databases. A possible way to connect a number of user terminals to a computer mainframe is to provide each with a separate electrical connection, or line, and bring these lines into the computer centre, where a front-end processor has the task of handling them. Indeed, this is how the first terminal systems were connected. For a few terminals installed within a limited site the method is feasible. However, for a large number of terminals installed over a wide area the quantity and cost of dedicated hardware becomes frightening. It is therefore necessary to look for alternatives which will allow some sharing of systems hardware between terminals. One very familiar system in which many people share communication hardware is the public switched telephone network (or PSTN), which has been with us for many decades. For most of its lifetime this system has supported only voice communication, but within the last twenty-five years or so it has been increasingly used for data also.
For this purpose data is converted into sound pulses at the transmitter and back into data waveforms at the receiver; the device which carries out this function is known as a 'modem', or modulator-demodulator. Unfortunately there are severe limitations on the data capacity of the traditional telephone network, which have restricted its usefulness in computer communications. Another important limitation arises from the time taken to set up calls in the PSTN. Where computers (or terminals) need to communicate sporadically with each other, using bursts of data, it is very inefficient to set up a fresh call every time a data burst must flow; the exchange and line allocations are held for the full duration of a PSTN call and will be underused for bursty data applications. A typical application of this kind is the modern automatic teller machine, seen outside banks, or the point-of-sale terminal which will soon be with us. On the other hand, the switched system may well be able to handle large quantities of data in a continuous flow, albeit at fairly low data rates. There is clearly a need for an alternative communication mode. Such an alternative system is now with us in a fully developed form, and its inception is largely due to another initiative from NPL. In 1965 a group under Davies developed the concept of 'packet-switching' as an alternative to the traditional 'circuit-switching' of the PSTN. The PSTN consists of a large number of exchanges joined by a network of lines. Dialling a call in a circuit-switched system causes a path to be found from the source to the destination user, with each exchange along the path taking its part in establishing the chosen route; the path is then reserved for the duration of the call and the data flows along the reserved path, line and exchange hardware being dedicated to this task. In contrast, a packet-switched system requires that data be segmented into units called 'packets'; each packet carries a header field in which are recorded such parameters as source and destination designation, packet number, etc. The packet-switched network consists of a number of nodes (small computers) joined by a network of lines. As a packet arrives at each node on its journey from source to destination, the packet header is inspected and the node decides on which output line the packet shall leave. When the packet arrives at the destination node it is delivered to the user. Packet switching may take many forms; routing may be adaptive, taking account of changes in network topology or in loading levels, or it may be fixed, in which case packets between particular sources and destinations always take the same paths; flow control of packets may depend on end-to-end exchanges of control signals, or it may be based on node-to-node control or some combination of these procedures. Error detection and recovery is an important feature of these systems; again this may be based on end-to-end or node-to-node procedures. NPL developed the concept in the late 1960s and soon had its own on-site packet-switched network in operation. This provided access within the Laboratory to services and terminals; at first it was run on an experimental basis, but soon it came to be relied upon as an essential feature of the computing systems of the Laboratory. In addition to giving access to the central computing facilities at the Laboratory, the network gives access to other systems such as the word processing facilities, Scrapbook and Edit.
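A minimal sketch of the packet structure and per-node forwarding decision described above follows. The header fields and the fixed routing table are illustrative assumptions; the original NPL network's actual formats and routing algorithms are not given in this text.

```python
# Hedged sketch of packet forwarding as described above: each packet carries
# a header (source, destination, packet number) and each node inspects the
# header to choose an output line.  The routing table below is an invented
# fixed-routing example, not the NPL network's actual configuration.

from dataclasses import dataclass

@dataclass
class Packet:
    source: str
    destination: str
    number: int
    payload: bytes

class Node:
    def __init__(self, name: str, routing_table: dict[str, str]):
        self.name = name
        self.routing_table = routing_table   # destination -> output line

    def forward(self, packet: Packet) -> str:
        """Inspect the header and decide which output line the packet leaves on."""
        if packet.destination == self.name:
            return "deliver-to-local-user"
        return self.routing_table[packet.destination]

node_a = Node("A", {"B": "line-1", "C": "line-2"})
pkt = Packet(source="A", destination="C", number=7, payload=b"hello")
print(node_a.forward(pkt))   # -> 'line-2'
```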
The network at NPL is still in operation, the present system being a direct development from the first. We thus see that NPL had its own packet-switched local area network as much as fifteen years ago. A description of the NPL network may be found in Davies and Barber (7). At the same time, packet-switching developments began in the United States with the network of the Advanced Research Projects Agency (ARPA) of the Department of Defense. This rapidly expanded from four nodes in California to sixty-four nodes covering the whole of the United States and beyond. Since that time many more networks have come into existence and carry an immense worldwide load of data traffic. Commerce and industry have come to rely very heavily on network systems. During the first years of development, all that the different networks had in common were the basic concepts. Any intercommunication between networks had to be achieved by means of specially designed 'gateway' nodes. This was largely because there was no commonality of procedures or of data formats. At the gateways the protocols of one network were translated into those of the next. This was clearly a very undesirable state of affairs, because ad hoc gateways were required between all network pairs needing to communicate. The remedy for incompatible networks is to establish a set of standards to be followed by all network designers. Standards for networking began to emerge quite early. One of the first was British Standard 4421, which defined a 'digital input-output interface for use in data collection systems'. The most significant source of recommendations in the field of data communications has been the International Consultative Committee for Telephones and Telegraphs (CCITT), part of the International Telecommunication Union (ITU), itself an agency of the United Nations; the CCITT V series of recommendations is concerned with data communication via the telephone system, while the CCITT X series of recommendations is concerned with the new data networks. Though going under the name of 'recommendations', the CCITT documents have almost the same status as formal standards; they are followed closely by the national PTT (Post, Telegraph and Telephone) organisations. The emphasis of the CCITT recommendations is towards the hardware of the networks, so that one finds recommendations for signal levels on lines or for methods of error correction on point-to-point connections; one does not find many recommendations for protocols allowing meaningful and efficient communication between one application program and another. In order to achieve effective communication between computers, or between terminals and computers, it is necessary to define a hierarchy of protocols. For example, there will be a protocol for data exchange between a terminal and the network node to which it is connected; there will be another protocol to establish reliable communication between source and destination hardware. Recently the International Standards Organisation, through its Technical Committee 97, responsible for data communications, has created an architectural model, known as Open Systems Interconnection (OSI), whose function is to provide an effective framework into which the individual standards can be slotted. The object of OSI is to allow inter-networking in the broadest sense. The architectural model has seven layers, ranging from the lowest (Physical) layer to the highest (Application).
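For reference, the seven layers of the OSI model mentioned above can be listed explicitly. The text names only the Physical and Application layers; the intermediate layer names below are supplied from the standard OSI model rather than from this text.

```python
# The seven OSI layers, lowest to highest.  Only 'Physical' and 'Application'
# are named in the text above; the rest follow the standard ISO model.
OSI_LAYERS = {
    1: "Physical",
    2: "Data Link",
    3: "Network",
    4: "Transport",
    5: "Session",
    6: "Presentation",
    7: "Application",
}

for level in sorted(OSI_LAYERS):
    print(f"Layer {level}: {OSI_LAYERS[level]}")
```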
The Physical layer includes the mechanical and electrical characteristics of point-to-point connections, whilst the Application layer provides for interfacing to user application programs. Definition of standards at the various OSI layers is proceeding apace; broadly speaking, the progress is from the bottom upwards, with many of the CCITT recommendations forming the basis of international standards. Though we may not be aware of the fact, many of us already use communication network facilities. Bank automatic teller machines rely upon network connections for reference to customers' accounts. Soon we shall see point-of-sale terminals appearing in the shops, so that 'plastic money' starts replacing cash for many transactions. These too will rely upon data networks to make the money transfers that are consequential upon purchases. The banks already rely on data networks for internal banking transactions. Soon we shall see the CHAPS network (Clearing Houses Automatic Payments System) coming into action for very large inter-bank money transfers within the City of London, whilst the SWIFT network has carried international inter-bank transactions for a number of years. Industry too relies heavily upon networking for communicating between various headquarters, offices and production plants. Office automation will be greatly facilitated by the Teletex service, offered by British Telecom and a number of other PTTs. This is a document handling system, defined by the CCITT, which allows sophisticated document formatting and transfer between users. Teletex, not to be confused with teletext, could be described roughly as a 'super-telex' service; indeed it is intended to work alongside the telex system and to inter-work with it. To the domestic user the teletext service (Ceefax and Oracle), available without additional charge on television, represents a primitive data information system; interfaces are now available between teletext and personal computers, so that programs and data may be loaded direct. At a more sophisticated level we have the Prestel system, also displayed on TV sets, but requiring a telephone connection; Prestel allows the user to operate in an interactive mode, so that, for example, goods may be ordered from advertisers. Already home banking is a possibility to a limited extent via Prestel, and it may be expected to become more widely available in a short time. There is already considerable international interest in the concept of an Integrated Services Digital Network. British Telecom plans to introduce such a service in late 1983 (8). This network will provide customers with a variety of new services and facilities, many of which are made possible only by the increased bandwidth provided by a wholly digital connection. Some of the facilities and services we have mentioned use the older established media such as the telephone, but most of them would not be possible were it not for the development of the digital computer. It is truly astonishing that all this has come about within the thirty-eight years that have elapsed since the end of World War II. The pioneers of computing, imaginative people though they were, can have had little conception of the developments that would result from their work.
null
null
null
null
null
null
null
null
{ "paperhash": [ "davies|communication_networks_for_computers", "mcdaniel|an_evaluation_of_the_usefulness_of_machine_translations_produced_at_the_national_physical_laboratory,_teddington,_with_a_summary_of_the_translation_methods" ], "title": [ "Communication Networks for Computers", "An evaluation of the usefulness of machine translations produced at the National Physical Laboratory, Teddington, with a summary of the translation methods" ], "abstract": [ "Preparing the books to read every day is enjoyable for many people. However, there are still many people who also don't like reading. This is a problem. But, when you can support others to start reading, it will be better. One of the books that can be recommended for new readers is communication networks for computers. This book is not kind of difficult book to read. It can be read and understand by the new readers.", "evaluation of the usefglness of machine translations nroduce~ at the National Physical Lab.oratory. Tea diD~ton, with a summary of the translation method. ~ ~ ~./~ l / Introdnction / The machine translation project at the National Physical Laboratory (NFL) has b?en terminated. It has always ha~ as its prime aim a demonstrat~n of the practicability of translation by computer of Russian scientific texts into En@lish. In order to test how far this aim has been fulfilled and further, to provide evidence to 6~i~e a potential agency intereste~ in givin~ a machine translation service, we he.carried out an evaluation experiment on our translations, the conditions of which as far as possible emulated those of a translations service. The results of this experiment are presented in this paper, together with a statuary of the translation methods used. The paper as a whole will thus give an independent presentation of \"what methods produced what results\". For a comprehensive account of the NFL translation techniques, see reference I. Evaluation of Translations We have been concerned with the translation of scientific Russian texts only. In considering how we might evaluate the results of our work, the context of use of scientific translations imposed two main constraints. Thus, firstly, in the vast m~jority of cases we woul~ expect readers of translations to be themselves experts in the subject matter of the material translated, i.e. they would be reading the translations because these reflect their main professional responsibilities. We may then expect that the inherent background knowledge of such readers will ensure a hiKh impetus to their comprehension of translations and help them through syntactic awkwardnesses and multiple-meshing choices. We would also expect that only a small peroenta6e of these readers would have any competence in Russian. Secondly, the items of translation being read by the above typical readers will normally be whole infor.~tion units (journal article, chapter of book, abstract, review, &c.), and they will have the freedom to ignore unimportant sections of such units an& to use sentence or paragraph context (or even remoter references) to help elucidate obscure sections. More specifically, a particular sentence may be poorly translate~, but because the reader can see that this is not an important sentence or because the context of (hopefUlly, better-translated) neighbourin6 sentences clarifies its meaning, that sentence may not affect at all an adequate comprehension of the whole." ], "authors": [ { "name": [ "Donald Watts Davies", "D. L. 
Barber" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "J. McDaniel", "W. L. Price", "A. Szanser", "D. M. Yates" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null ], "s2_corpus_id": [ "60799458", "40977066" ], "intents": [ [], [] ], "isInfluential": [ false, false ] }
Problem: The paper aims to trace the computer research program at NPL, from early computer development to various applications, including natural language processing and computer networking. Solution: The hypothesis of the paper is that the research conducted at NPL in the fields of machine translation and natural language processing, as well as the development of packet-switching networks, has significantly contributed to the advancement of computer communication technologies and services.
495
0
null
null
null
null
null
null
null
null
b3a3dd9fccf638eb82b3de065ff9ac72c49e7559
237295816
null
Doing It the other way
The paper explains how - given appropriate specialisation, skill and speed - translators can as yet make a very good living and produce acceptable presentation without the need for sophisticated equipment. Modern typewriters and dictating machines can still prove adequate under certain circumstances.
{ "name": [ "Hayes, John" ], "affiliation": [ null ] }
null
null
Proceedings of Translating and the Computer 5: Tools for the trade
1983-11-01
0
0
null
If this were show-business, one could be excused for coming up with the expression 'Now follow that' in reference to the comments of the previous speakers. If I ever had any wind in my sails, there is not much of that left now. Perhaps the idea of giving me this spot was to keep the audience awake at tea-time simply by being controversial, so here goes. One could easily get the impression that any translator attempting to survive without data banks, word processors or machine translation is doomed. There are, however, quite a number of us who, by their continuing survival without, as yet, any of the new gadgets, prove otherwise. Indeed, it can still be done the other way. It was originally my intention to quote at this point certain developments which have taken place in the field of numerical machine tool control. This is a field in which data processing has long since been much more widely established than in word handling, and about which I am more knowledgeable than data processing in general. In this particular area there are signs indicating a slight movement away from the original intention of total automation towards a gentler, 'people-involving' concept. I am mentioning this merely to indicate that computerisation does not necessarily have to go on revolutionising. A reversal or at least a retardation of the revolution is not only possible but quite likely; as I have just said, it has already happened in the machine tool industry. When I told a friend of mine that I would make this point, he suggested that I was ducking the issue when it was clearly my age that prevented a readier acceptance of the new trends. There is probably something in that, as there is an in-built reluctance in some or all of us to accept anything with which we did not grow up. However, to show that this is not the only reason for being a stick-in-the-mud, let me go back a bit. There was a time when translators were simply expected to produce translations in some reasonably legible form, and the translation user took it from there. When translating became recognised as a 'growth' business (a sad and slightly deplorable fact of life), presentation became rather more important. So some of us moved with the times, joined the trend and began to use more sophisticated typewriters. Now we seem to be moving into an era where presentation is all-important, even if it hides a far from perfect product. There is a risk that the wrapper is becoming more important than the contents. This must be said simply to protect translators who produce excellent work by old-fashioned means against poor work turned out on pretty-picture-making facilities. It does not, of course, suggest that using a word processor or some such equipment precludes the production of excellent translations. Many translators feel that they work best by translating on tape and by employing typists. There can be little doubt that this is the most productive method if the translator is able to edit the tape while translating so that only a single typing and reading stage is involved. No other method is likely to be able to compete with this technique at present. Self-typing - even on a processor - has to be slower because the translator/typist has to 'stop and think' and because, in any event, typing is slower than talking. The ratio of translating time to typing time is between 1:2 and 1:3, i.e.
a good translator working at a good speed on tape for ten hours will produce work requiring twenty to thirty hours to type. (This assumes that the copy typed is not absolutely straight-forward page-for-page text but involvesas is mostly the case -certain layout problems.)If one accepts that fast tape translating and audio typing is one step towards speed and efficiency without data processing, there is another ingredient to achieve rapid and successful output, and that is specialisation. It avoids the time-consuming research and dictionary wielding that holds up otherwise speedy translators.Yes indeed, there are quite a few translators who (A) dictate on tape and once only (B) have one typescript typed once only (C) have the specialisation skills.Referring to these people as 'ABC' translators, would they or would they not benefit from word processors, data banks, etc.? And at what cost could such benefits be achieved?The common situation with ABC translators is that they run an office at home and use one or several outside typists. In my own case, there are three typists at three different locations. The installation of word processing facilities would therefore involve four systems and one printer. The typists would make no hard copy at all and the correcting and printing would take place in my office. The investment involved would at present be some £12-15,000 (tax deductible). The main benefit would be the elimination of a certain amount of aggravation caused by typing errors either uncorrected or corrected on the top copy only (the latter the result of another 'desirable' gadget, the correcting typewriter). The WP would certainly make such corrections easier, and a letter-quality high-speed printer would often take the place of the photocopier.Naturally, the WP could rearrange layouts, justify and do other gymnastics which would no doubt turn out more beneficial than anticipated.The biggest benefit and the one that may in due course induce me to go for at least two WP set-ups has nothing to do with translating. It is the fact that the computer could handle my accounts and invoicing. As this involves total rearrangement of any existing successfully operating accounts system, doubt raises its ugly head even in this department. A computer consultant recently said to me: 'If you have only 200 to 300 transactions a year and if you can write faster than you can type, forget about data processing'.At present, the ABC translators do not have that much to gain from WPs and, if truly specialised, perhaps not a lot from data banks either. A home-made memory-jogging dictionary or the facility for building a brief glossary for the work in hand to avoid using different names for the same thing (often also a fault of the source text, incidentally) could be a boon, but takes quite a lot of time to establish for a 'one-off translation. A written list of notes does the same thing quite easily and a lot more quickly.I am beginning to sound like one of those chaps that would put the kibosh on any party, but would again like to remind you that this is only in reference to ABC translators who may in fact be a minority. There are indications however that their number is increasing.A frequently quoted advantage of WPs is that they allegedly eliminate much hard copy. Once in a while clients suggest that they would like translations on disks. The question is: whose?There are so many variants that whatever system a translator buys will only satisfy some of his clients. Standardisation seems a long way off. 
Next we have to consider the maintenance of the equipment. It can create not only expense and problems butmuch more serious -hold-ups. Maintenance contracts are costly and the small print needs watching.There is little doubt, however, that the main factor at present holding other and especially the ABC translators back from investing in (in particular) WPs is cost.A pocket calculator so basic no-one would buy it today cost £30 in 1970, equal to some £120 now. A similar but much improved calculator now costs about £8. If the same trends can be expected in the WP field, prices will come tumbling down soon enough. The computer and the display -the heart of any WP system -are fairly inexpensive already. It is the mechanical engineering bits that cost the money, i.e. the disk drives and the printer. Things will change. At present, ABC translators have too little to gain for the cost involved. For self-typists a WP has much more to offer.When the equipment becomes cheaper, even the ABC translators will not be able to resist the temptation and we shall all learn the new skills, no matter how reluctantly.In the meantime, a few good dictation machines (totally reliable, my five have had no attention whatsoever in five years), a few good electric/electronic typewriters (not quite so reliable but not bad -and a stand-by machine so affordable), one or several good typists, and an ABC translator can produce around one million words of translated copy per annum. That, depending on the rate collected, can represent a turnover of between £30,000 and £60,000 per annum. The overheads would be fairly low in view of the simple and inexpensive equipment.While the WP and other aids may make life easier for the ABC translator, they are unlikely to raise his output or income. Hence the reluctance to invest. The one machine that I am personally waiting for, and that would certainly change my attitude both to relearning and to spending, would be a machine that could convert the spoken word into a screen display only requiring simple keyboard corrections of spelling mistakes before printing. Such a machine would be worth having at almost any price, as it eliminates many of the problems at one stroke and makes the translator totally independent and mobile without turning him into a typist (with the time losses this involves). AUTHOR John Hayes, Managing Director, Hayes Engineering Services, Clover House, Parrotts Close, Croxley Green, Hertfordshire, WD3 3JZ, UK.
null
null
null
null
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
495
0
null
null
null
null
null
null
null
null
706dbb95eef5a65c0360862818b31a97b016b773
237295802
null
The benefits from handling translations electronically
An examination of some of the equipment available and how it can be integrated into an office system, with a look at some of the benefits; also, management systems and how they can affect different areas of the company.
{ "name": [ "Barber, Peter" ], "affiliation": [ null ] }
null
null
Proceedings of Translating and the Computer 5: Tools for the trade
1983-11-01
0
0
null
Ten years ago, few translators had electric typewriters, even fewer used carbon film ribbons. The industry standard was the manual machine, draft typing was commonplace and even manuscript was accepted. Like the aeroplane and the motor car, typing has developed extremely rapidly in a short time. It is hardly surprising that the users are bewildered by the choice of similar-but-different machines - not only the wide range of manufacturers, each with his range of models, but even the different types of equipment which are available. My intention here is not to offer comparisons, nor really to offer specific advice. What I hope to do is to examine the various sorts of machines which can be of benefit to the translator, and end with some of the reasons which guided my own company's choice, as an illustration of our requirements and how we sought to satisfy them. Let us start with a basic assumption that you are a translator and are in the market for some electronic equipment. At this stage, it does not matter whether you are an individual, a translation company or a translation department within a large company. The distinction comes later in the process, when you consider scale, and type of equipment, spending power and return on investment. We have already come to the first problem. As a broad statement, equipment manufacturers were not - until relatively recently - aware of the particular needs of the translation world. Many, I am sure, felt it safer not to dabble in those waters. An alternative to doing your own evaluation is to seek the advice of the experts. Another set of problems, in that the expert will probably also suffer from lack of awareness of the translators' specific problems; he will learn (no doubt, at your expense) and take his questions back to manufacturers... who are also not aware. Furthermore, with the best will in the world, no consultant can be totally unbiased, nor can he have the depth and width of knowledge without extensive research. It may be of interest to know that the British government once felt the need to offer grants to companies to pay for an evaluation of computerisation within their organisation. We applied, having been quoted £2,000 for a study to be carried out, and were refused on the grounds that we were not a manufacturing company (that scheme was called MAPCON). It is a significant coincidence that the grant limit was also £2,000. After that, and several other experiences, we decided to use our own pooled ability within the company to decide first, what we wanted, and second, which equipment would do the job with the smallest amount of compromise. The second part of this conference is on machine translation. I am interested, but not with a view to investing in the foreseeable future. We have looked at the cost of equipment, results and cost of post-editing, taking as guidance information gained from a previous Aslib and Translators' Guild conference. We shelved the idea, because we could not guarantee sufficient throughput per language, per subject, per year, because the costs of input, MT and post-editing exceeded our normal costs, and because we did not want to afford the cost of acquisition. We even considered the possibility of time-sharing with other companies, but realised that we would all suffer from the need to establish priorities - which is difficult enough to achieve even when everything is under one's own control. What does that leave us? 1. Telephone answering machines 2. Telex 3. 
Facsimile 4. Word processors (WP) and electronic typewriters 5. Communications 6. Optical Character Recognition (OCR).Some of the applications of this equipment have been formalised with jargon descriptors or system names, such as local area networks and Teletex, and may be of interest, but mainly to organisations.Our company works normal business hours, five days a week; we close on English Bank Holidays, and, as policy, keep open with a skeleton staff between Christmas and New Year. There is nearly always someone in the office at least half an hour before and after normal hours. Outside that, if it really cannot wait until the next day, I am in the Translators' Guild Index and can usually be contacted; it is nearly always translators who ring me at home, and I prefer to talk and sort out the problem rather than to cause delay, doubt or even error for lack of contact. With that availability, which covers most needs, we have never felt that a telephone answering machine was an essential piece of equipment for us; the same or similar arguments would apply to most organisations. However, I can see that an answering machine would be of great benefit to an individual translator, who may miss out on a large job simply because he couldn't answer the telephone when it rang. There are certainly others better able than I to advise or comment on answering machines, and I know that very informative and helpful surveys of what the market offers have been published. I also know a fair number of translators who have them -I remember one who used to leave his machine on permanently, and check it for calls every half an hour or so; he would then sort calls into priority order depending on the messages and ring those he wanted to talk to, but only when he was ready. It increased his telephone bill, but meant he was only interrupted when he wanted to be, not in the middle of something urgent, difficult, or important. I derive a certain amusement from listening to the messages people leave for callers but sometimes wish, when I am trying to place an urgent translation, that I could have some idea of when 'as soon as possible' might be. Having said that, of course, I can see you don't want to tell a potential burglar that you will be away until next Tuesday.The telex machine has been around for years, and ours still gets a lot of use, even if we do get the occasional request to translate something into Greek or Arabic and send it back by telex. Having decided that there was a sufficient need for us to keep one, as part of our 'global' philosophy, we looked at equipment.With the old-style telex machines, there were two major problems: no matter how urgent your message, an incoming call always interrupted your tape preparation, usually spoiling the tape; there was a limited and inefficient method of preparation, using the slow and specialised telex keyboard.There were five solutions open to us:1. put up with the situation (cheap, but did not solve the problem);2. acquire another telex terminal (this would have been possible but it was still a one-purpose slow unit);3. acquire an electronic system (this was efficient but required special operator training, and cost significantly more); 4. link telex preparation into the word processor (again, viable, but it meant that most outgoing telexes had to be prepared by the WP operators -specialist training, conflict of priorities, and the telex bottleneck would then also affect typing load);5. telex cutter. This is what we chose. 
It is an electronic box which is connected to a normal golfball typewriter and produces a telex tape.For our purposes, it had four advantages: (a) it was relatively cheap to buy outright (about £1,200 installed);(b) the typewriter could still be used as an ordinary typewriter, even to the extent of producing a telex tape with a proper confirmation typescript, simultaneously;(c) any typist could use it, including the two-finger variety;(d) it prepared standard telex tape at typing speeds, with a buffer memory to allow sensed error correcting.I accept that our solution is not necessarily everyone's answer, and it may be that someone can fault our arguments sufficiently to show that we made a wrong decision. Fair enough, in that case we shall have served as an example whereby others do not make the same mistake -isn't that what learning and sharing knowledge is all about?Despite all the modern electronics, telex still serves a useful function. We find telex useful for quick communication between ourselves and our customers, and also for translations up to a maximum of about 1,000 words or 150 lines. Most telex messages are not longer than 300 words or 40 lines. The important drawback in translation by telex is that you cannot transmit accents except by using conventions which get the message across but are tedious to follow; ideally, both communicators need to know the language, or the circumstances must be such that any accents can genuinely be ignored.Incidentally, we find it of great benefit to leave the tape punch on permanently. This enables us not only to retransmit the telex but also to rerun it if, for example, the paper jams in the machine. I have one customer who has taken this a step further: at our suggestion, his telex operators have a standing instruction that, if the message is 'in foreign', they automatically retransmit it to us; the first time the addressee sees it is when our translation appears on his desk attached to the original.I am still surprised at finding resistance to telex -I have one customer who regularly makes a 15-mile round trip by car with incoming messages which his operators refuse to keyboard and which cannot wait for the post. What he needs is facsimile.Fortunately for users, there is an international specification for these machines (otherwise known as telecopiers or telefax machines), which means that even if they have their own specific operating mode, they can communicate with any other machine of the same CCITT group. The old original telecopier is now labelled as a CCITT Group 1 machine; most users that I know have Group 2 transceivers, which give a transmission time of 3, 4 or 6 minutes for an A4 page. They are operable manually or with optional automatic reception. Group 3 machines are the latest standardised units on the market, intended for the high-volume user, and have transmission times per page measured in seconds (60 or less) rather than minutes.Most, if not all, Group 2 machines have the ability to talk to Group 1 and a large number of Group 3 machines can communicate with their lesser brethren in Group 2.Facsimile has certain significant advantages over telex, but without being a total substitute. It is possible to transmit diagrams and original documents without needing to transcribe them, so the overall process is quicker, guarantees fidelity to the original and in the long term probably works out cheaper than telex. In this respect, facsimile transceivers are better than communicating word processors. 
It is also possible, of course, to transmit any language, whether it uses ideograms or non-Latin scriptprovided that the characters are large enough and clear enough to be legible after transmission (remembering that noise on the telephone line will appear as black dots and streaks). Almost certainly, the major disadvantage is in the comparatively small number of users; as a routine with new clients, I ask if they have facsimile, and am surprised at how many ask 'what's that?'. Yet the industry is already working towards a Group 4 standard.*Without a doubt, the most significant piece of equipment for any translator is the means of presenting his work. As I said at the beginning, ten years ago the norm was a manual typewriter with fabric ribbon, and only a handful of translators offered work on electric machines with carbon film ribbon.Now, the handful have screen-based dedicated word processors, another handful or so have microcomputers with a WP capability, a further handful have electronic typewriters, and the predominant method is now, it would appear, the electric typewriter. The 'trusty (rusty?) manual' is now very definitely down-market in terms of quality of appearance of work and ease of manipulation of text; not to mention typist's fingers as the translator's equivalent of barmaid's elbow.So, where to aim, which type of equipment do we look for? Obviously, finance will play a large part in anybody's decision, and we are now fast approaching the point in this paper where the individual has a lesser need than his corporate counterpart; a translation company or department may arrive at the same decision as the individual translator, but want more units.The range of machines is so vast and varied that I am not even going to attempt to discuss them in detail. * Since this paper was prepared, our company has upgraded to Group 3 facsimile with Group 2 talkdown. Interesting to note that at Group 3 speed it is quicker, more reliable and cheaper to transmit a one-page document by facsimile than by post. The quality of reception is also good enough to retransmit to a Group 2 machine with acceptable legibility. The choice between manual, electric and electronic typewriters, with fixed basket typeface, golfball, thimble or daisy wheel, compared with microprocessor or dedicated word processor, is the first level of choice of any potential purchaser; he then has to examine all the offerings of all the manufacturers of equipment of the type he has decided to buy, and weigh the various operating benefits of each Against a cash budget. The nearest comparison I can offer is buying a car, where you first spend ages reading the brochures, then walk the showrooms, perhaps with a test drive or two, followed by price-hunting, comparison of specifications to see which is best value; do you want 1.1, 1.3, 1.6, 2 or 2.3 litre, in the Basic, L, GL or Ghia trim? After more decision-making on the choice of colours, off you go to the showroom for the last time and -'I'll have that one, because it's a nice colour and I can have it tomorrow'. Fortunately, as yet, a choice of colour does not appear to be a major consideration in electronic office equipment. When we as a company wanted to update our equipment and first started looking four years ago, I admit we were naive on the subject. 
For the most part, we had to use our own judgement, since WP and translation were new and unaccustomed bedfellows, and - I must be honest - a lot of manufacturers not only could not give us answers, but also had never even thought of some of the questions. As simple and quite recent examples, we had to buy Greek daisy wheels from a firm in Honolulu because that was the only place we could find them; Portuguese also proved a problem, because the accents on a normal daisy wheel are not high enough to be used on the upper case letters, even if you can superimpose them.* (* Our equipment is 'clever' enough to enable us to adjust the height of the accents - a recent development - and also generate our own character sets (on-screen) and keyboard layouts, for example, for Russian, Greek and various Eastern European languages. The limitation is set by the availability of suitable daisy wheels.) Looking back, I am amazed that our path into word processing was so smooth. I can remember, before we had full accent capability, that we used to have to add certain accents using one of our obsolescent golfball machines. Let me, very briefly, give some of the thoughts which led us to dedicated word processors: 1. Electronic machines were a step up from electric, but the mini-memory available then was virtually worthless, so the cash difference really went on the buffer memory for sensed error and on the improved presentation and layout capabilities. 2. For a bit more, you could have a magnetic card memory which became infinite, but still without any VDU. 3. Micros - at that time - offered word processing in English and were still hesitantly mastering that level. A bit nearer to the ideal, but the thought of multiple conventional key operations also dissuaded us. 4. So it had to be a minicomputer at least. There remained only one other major decision and that was to compare stand-alone and central processor, bearing the future in mind. A central processor with individual terminals gave flexibility and the possibility of future expansion. On the other hand, if the CPU or printer failed, we would be stuck. However, applying our now habitual policy of redundancy (i.e. duplication of capability), two totally independent stand-alones would better meet our needs, since the breakdown of one would still enable us to shuffle priorities on the second machine and satisfy our clients. So that was what we did. One other factor influenced us, and that was computerised accounting. Just as people may be literate or numerate, we felt (and were also advised) that word processors did not perform so well on accounts, nor did general-purpose machines process text as efficiently as dedicated equipment. We therefore decided that we would accept incompatibility of hardware and use a separate specialised system to handle our accounts - but that is another story, and our most recent acquisition, again after much searching to find the best match to our needs. With dedicated word processing, a single stand-alone system can easily reach £10,000, and this level of investment means a serious commitment in longer-term finance, not to mention a sufficiently stable workload and turnover to make the business risk viable. I can easily understand why individual translators hesitate, and assure them that their fears are shared at the company level. It is a lot of money, and the paying doesn't stop there. 
Ribbons are more expensive, diskettes work out at about £5 each depending on what you need, but the most significant after-sales costs are insurance and maintenance. I concede, reluctantly, that a maintenance contract is essential, since it ensures a priority response in case of need, but I really begrudge paying 10-12 per cent of the total hardware price, every year, and no argument will convince me that it is reasonable -after all, our translation work is expected to be consistently 100 per cent reliable.It is in many ways a pity that in today's conference programme the users have their say after the manufacturers and suppliers. On the other hand, this seems to be a typical situation, where the seller dictates the buyer's range of choice. It would have been interesting for the suppliers to listen to the users' needs and then to say what is being done or what can be done to satisfy those specific needs. After all, we already know we are an elitist group of people, and what we need our equipment to do is very often far removed from the machines' original design capabilities.The next heading in my list, you will recall, was communications. More and more people have word processors, more and more people want to communicate. Some, like my own company, have modems which enable us to exchange data With those who speak our language -using the standard teletype code or an IBM protocol. Even then, there is a problem, because we probably transmit at different speeds and so our communication is mutually unintelligible; simply put, I can talk to you but you cannot understand me, therefore we do not communicate. At the international level, an added complication which we have encountered is differences in the software between countries, for the same make of WP. Each manufacturer has his own machine system, and this further complicates the problem. The nearest we appear to have got so far is little nodes of CP/M micros that can intercommunicate, other hardware that can be interfaced using IBM protocols or teletype code (which has limitations) and so on.I have already had a need to supply soft copy (text in diskette form) to clients with other hardware, in one case Wordplex to Philips, and in a second, more recent case it was Wordplex to Wang. The exercise was carried out by transmitting text over the telephone, using the common teletype code, direct into the other machine. However, the text needed reformatting and some concentrated screen editing to make it presentable, simply because some software instructions were not identical in the two processors. The first exercise was English, and was relatively successful -at least, our client was happy with the result; the second one was German, and we had to do all sorts of global substitutions to present a correctly typed copy, in addition to editing work. Frankly, it would have been so expensive to do properly that we abandoned all screen editing and just transmitted from one machine to the other, with the client's agreement. In cash terms, it would have been cheaper to keyboard, although much more time-consuming. I must say however, that as an OCR exercise to get it onto our own WP it was extremely successful -will Teletex solve my problem?I know translators and clients who have word processors and we are all -all -waiting, quivering in anticipation, for the magic system that will put us all in contact. 
I think it would be reasonable to say that my company is one of the pioneers in applying electronics to translation, and we have done our best to remain aware of progress, yet the way ahead still seems to us to be cluttered with incompatible alternatives.An extremely strong message emerges from a recent survey carried out by the Technology special interest group of the Translators' Guild, and that is that translators with word processors want to communicate, and many of those who are not yet committed give the desire for widespread communications compatibility as one of their major reasons for holding back. The other obvious related factor is the high cost forecast for such a capability -even at today's postal rates, £2,000 buys a lot of postage stamps. I sincerely hope that, even if nothing else results from this conference, this message goes home to all those who are in a position to influence matters, and that they act on it.We are told that we can have access to Eurodicautom; we can have Prestel, Viewdata, Teletext with a 't', and so on, with the right communications capability. I already have one modem and software, why should I have to have half a dozen different and expensive ways of doing the same basic task? OCR I have perhaps spent rather too much time looking at equipment, but one needs to have the tools to use them in any form of system. There is one item which I have so far not mentioned and that is Optical Character Readers -or Recognition -OCR for short. An OCR can be connected to a word processor and, in very simple terms, it scans a page of typescript, identifies each letter and transmits a code for that letter to the word processor. To put a full A4 page on a WP takes well under a minute. It sounds marvellous, and it is, but it has limitations: the OCR will only read a limited number of typewriter faces, and may even balk at an equivalent typeface; it will only read portrait, not landscape; if it cannot read a letter, it transmits a block symbol, which is helpful, but it may also think it reads a character correctly and be wrong. Experience helps in remembering the weak points of any particular face. There are also difficulties in reading accented letters. The equipment is probably only of marginal interest to translation departments; individual translators are only likely to meet up with OCR by being asked to provide typescript suitable for input. We use ours primarily as an additional typist, but with variations which I hope to illustrate shortly.'Systems' sounds so technical and complex; here it really only means organised work routes and methods, but using electronics.My first and dominant piece of advice in this context is: take the brakes off your imaginations and let them run free; throw away the blinkers, and dream a little. People have tended to laugh at 'think tanks', but the idea does work, provided you don't produce a spontaneous idea from the depths of your mind only to let your conscious mind reject it as being impossible by normal conventions. We've found this once or twice, and made the idea work eventually, by saying 'I want to do this: how can I achieve it?' 
and going on from there. Let me quickly list the relevant bits of Able's equipment, before illustrating some of our routines: IBM golfball + telex cutter; telex transceiver; facsimile (CCITT Groups 1, 2)*; OCR; 2 independent word processors (2 keyboards, 2 VDUs, 2 printers); communications; modem; accounts micro + hard disk; other typewriters, photocopiers, etc. (* Now Groups 2/3 but the application is unchanged.) The two processors have dual-ground capability (background, foreground), meaning that one operator can control a continuing background function whilst simultaneously carrying out another operation in foreground - effectively, doubling each machine's capabilities. One WP is hard-wired to the OCR, the other to the transceiver modem, which has a direct external telephone line avoiding the switchboard. A hypothetical job situation could arise where all this is needed. We may receive a telex enquiry, which is answered by preparing a tape on the telex cutter and transmitting it back; the text comes in by facsimile and is translated internally, draft typed in a machine-readable typeface. The draft is then fed into the WP, edited for layout and misreads, plus linguistic check (of course), then transmitted back to client via the modem and the normal telephone line. That's only a contrived illustration, but different combinations from it are part of our daily routine. We are also able to generate our own character sets on the WP screen, and position the characters where we want them on the keyboard. This means that, apart from the obvious exceptions like Chinese, Japanese and so on, and, for the time being at least, languages like Arabic which read from right to left, in theory our language capability is only limited by the availability of suitable daisy wheels. One aspect of British Telecom's Intelpost facsimile service which we have recently discovered is that we as subscribers with an Intelpost contract can transmit from our machine to an Intelpost office near a client or translator, and they can use the link in reverse, back to us. It also works to and from other parts of Europe. Provided the recipient is prepared to accept the loss of quality for the sake of speed, it is beneficial. Thinking of speed, with so many means of communication available, it becomes more important to select the most efficient method for the task in hand: courier service, rail/air, communicating WP. I recall, a couple of years ago, seeing a promotional photograph showing a telex for translation that was around 6 metres long. Of course, it depends where it came from, and other factors, but consider the length of time needed to prepare, proof, correct and transmit that amount of text, let alone the cost of that work. My office is about 40 miles, 65 kilometres, from London. A client insisted on sending us twenty pages by facsimile; it took about one and a half hours, and cost around £20. A motorcycle messenger, in that instance, would have given us a better copy, for less cost, more quickly. Is it cheaper or quicker to send twenty pages by Intelpost to Exeter, or is British Rail's Red Star cheaper and just as quick? It is not efficient to give the automatic response. You need to pause long enough to consider the relative costs and advantages of the different options (mind you, once communicating word processors are linked, this will all be academic instead of epidemic). Coming back to word processors, one of their greatest advantages is the ease of changing your text, and playing with the shape on-screen until it is correct. 
In the 'bad old days', to produce camera copy, we used to read a translator's draft, edit it and mark it for layout; only when we were 100 per cent happy was it given for typing, and heaven help the editor if something had been overlooked and needed correction. Remember, too, the difference in quality between camera copy and headed paper texts - camera copy can have patches or obliterating fluid, provided that it photographs clean, but headed paper work must be perfect. The average page contains around 1,500 key strokes, all of which are potential errors - particularly when retyping and it's the last sheet of client's notepaper! The word processor has changed all that, for us. Now, the draft goes straight to the WP operator, and all the linguistic editing is combined with proofreading and layout approval into a single operation. That saves time and anguish. Only after the final corrections have been approved do we touch the headed stationery. I said earlier that the norm for freelance translators was the electric typewriter - or better. The OCR helps us benefit from that, since a fair proportion of the translators I know can or naturally do produce their work using a machine-readable typeface. One hears tales of lengthy, complicated instructions to translators on how to prepare texts for machine input - simpler, almost, to type the camera copy. Our philosophy is that the OCR is an internal benefit, but not at the translators' expense. Our only brief is to ask for one of a choice of six common type styles. We have been forced to set this principle aside on two occasions, for German, but in both cases it was simply to adopt the convention of 'e' to indicate umlauts, which were then reinstated using global substitutions. It was the logical course of action and was with the active - not reluctant - help of the translators concerned. Geoffrey Samuelsson-Brown is going to be talking in detail about glossaries on word processors. Those who heard me speak at the Translators' Guild Forum in June will already know that I have devised a word processor-based glossary: Polyglot, Bi-directional, Alphabetical, Reversible entry, By user and field coded, Electronic, Revisable, Word Processor Based - GLOSSARY - which uses a system of language, subject and user codes to extract specific vocabularies from a polyglot alphabetical list. With, I hope, the kind permission of the editor, and since it is both relevant and not in print elsewhere, I have included the text of that talk at the end of this paper. The system goes further than that simplified description, because the language and subject codes are used to help in translator selection; they are also used in the accounts computer and for subsequent statistical analysis. The user code also serves as the client account code, and each translator on our records has a personal reference number. Theoretically, if I wish, I can analyse our periodic workload in terms of words per language per client or per translator; I can analyse our workflow in a given set of combinations to see whether it is regular and high enough perhaps to recruit an additional full-time staff translator to handle it. The whole point of this is that a single unfettered idea can be made to serve many useful purposes, creating a truly integrated system. My next obvious stage will be to look at incorporating order processing into the same system. 
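Read with modern eyes, the workload analysis described here is a simple aggregation over job records keyed by the same language, client and translator codes. The fragment below is only an illustrative sketch of that idea in present-day Python; it is not anything the company ran in 1983, and the field names, codes and word counts are invented.

```python
# Sketch (hypothetical) of tag-based workload analysis: words per language
# pair per client, and per translator, from a flat list of job records.
from collections import defaultdict

jobs = [
    # language pair, client account code, translator reference, word count
    {"languages": "DE-EN", "client": "A001", "translator": 17, "words": 4200},
    {"languages": "DE-EN", "client": "A001", "translator": 23, "words": 1500},
    {"languages": "FR-EN", "client": "B014", "translator": 17, "words": 2800},
    {"languages": "DE-EN", "client": "C102", "translator": 23, "words": 6100},
]

by_language_client = defaultdict(int)
by_translator = defaultdict(int)

for job in jobs:
    by_language_client[(job["languages"], job["client"])] += job["words"]
    by_translator[job["translator"]] += job["words"]

print("Words per language pair per client:")
for (languages, client), words in sorted(by_language_client.items()):
    print(f"  {languages} {client}: {words}")

print("Words per translator:")
for translator, words in sorted(by_translator.items()):
    print(f"  translator {translator}: {words}")
```

With records shaped like these, the per-language, per-client and per-translator totals the text mentions all fall out of a single pass over the job list.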
This is about as far as we can go, since I believe the rest of the job -the translation itself -is best done by humans.The ability of word processors to reprint text can be put to good use with even such simple spare-minute tasks as running off address labels for regular contacts, so that they are already accurately done when you have just two minutes to get an envelope in the post.Another simple help is with your own personal list of telephone numbers. We used to have a thumb-indexed list of regular numbers -that is, all the A's together, all the B's and so on, but not in alphabetical order within each letter. We then had to scan two or three pages to find the name. Life is much simpler now it is on word processor, in strict ABC order, and updated on demand.This is what I meant by letting your imagination run free. Some ideas are big and complex, but others are so simple and obvious that they get missed.I would like to end with one more illustration on the use of electronics, followed by an open question which I hope will merit wider debate. Various methods of calculating and pricing texts are used around Europe, and all have their merits and disadvantages which I assume are well known. Their main common disadvantage is that they are all very time-consuming. I reckon that in my company we waste well over 2,000 man-hours a year -more than one personyear -on that task, and the results are still only approximate, conventional, and open to debate.Word processors are capable of producing a tally of the length of text automatically -ours produce a character count, others count words, but whether it is bits, bytes or nibbles, it does establish a finite electronic length of text.Let me go a stage further. Not every text which comes into our office needs to go on word processor, but it takes a fraction of the normal counting time for us to put the typescript through the OCR and establish a WP character count. The joy of this is that it will still happen if the typeface is not machine-readable (experimentally, we have even achieved consistent results with Arabic, for example; not a true count, admittedly, but sufficiently accurate for the purpose).Now that the use of word processors is becoming more widespread, the time seems ripe to consider adopting a common counting method, and this would appear to be an acceptable solution. Yes, there are still problems to be solved, and yes, there would be a need to convince and educate at all levels. When I think of all the wasted productive time, I certainly feel it is worth serious consideration by all those who are affected or afflicted by Word counts.May I leave you with one last thought to bear in mind When you are considering what to buy: D THINK AHEA APPENDIX Glossary on word processor* How many times have you knocked over a box of index cards and spent the next hour or more cursing them and everything else under the sun for getting in the way? That's the first question.How many times have you said to yourself: 'I've seen that term before' and wished that your immaculate filing system was also an efficient retrieval system? That's the second question.Years ago, in my youth, I was in the happy position of working in a government department which had a card index system of the sort that terminologists dream about. It stood so high that the top drawers canted down for access, and it was around forty feet from end to end. I imagine that by now it is all on computer. 
In those days it was an idealist system, full of sources and references and cross-references and so on. The Institutional terminologists -or could I coin the term 'terminologophile'? -among us would have been in raptures. I said government department; like all such monsters, it had an army of contributors and must have cost a considerable sum to produce, over the years. After all, there were no pressures of time or cost-effectiveness. It was simply recognised as essential.When I first joined Able Translations, in 1972, I was full of enthusiasm for term banks of this sort, and perfectionist dictionaries where every term was actually proven by a text reference, in context, in both source and target languages -all terms then being totally interchangeable.I quickly learned about the economics of necessity.Let me throw a few thoughts at you:A conscientious translator will automatically make a note of recurrent terms to ensure consistent repetition. 2. Assuming an expectation of repeat business, those terms need to be identified in some way and kept for future use. 3. Problem terms and abbreviations -things which are perhaps of more general use, but which caused problems in solving -need to be recorded, and retrievable. 4. Now, multiply each of these mini-glossaries by the number of languages you work in.And now, multiply the total by the number of discrete subject fields into which your work may fall. * Delivered at a forum of the Translators' Guild, London, June 1983. 6 . Finally, multiply by your number of clients -not just the active ones. Don't forget that a client source may send you work from several of their clients (translation companies, advertising agencies, translation departments of companies, and so on).The simple answer to this complex problem is... utter confusion.Let me give you a little more history or perhaps a skeleton from our cupboard: Able's reference library contains several thousand terms which have been painstakingly gathered over the years. Little bundles of cards with a rubber band round them, pages of typescript notes, pages of manuscribble, and so on. For years we had realised that they were useless as they were and -that famous promise -one day we would merge them. The problem was the method.After a year of research and enquiry, demonstrations and quotes, in 1981 we bought two Wordplex WP systems. The word processors gave us an immediate ability to file, alphabetise and search for terms, so the essential problem was solved, but we still had to sort out exactly what we wanted to achieve.First a riddle: why is it that whenever you are hunting for a term, if you are going to find it at all, it is in the last place you look? Because, when you find it, you stop looking...A shelf-full of dictionaries is typical of this. There is nothing to be done at that level... yet. But glossaries, term banks, collections of words which you make yourself are a different question, because the decisions are yours. For good or bad, this is what we have done.Our problem 1. As a company, we handle many language combinations. 2. We work in a wide range of subjects. 3. We need to isolate specific customer-preferred terms. 4. We wanted to minimise search time and locations. Our solution 1. A single word processor-based glossary. 2. Capable of identifying language combination by tag system. 3. Multilingual (Latin alphabet). 4. One alphabetic sort, regardless of language. 5. 
Bi-directional: where terms are direct equivalents in source and target language, they are reversed electronically and entered both ways.User or usage coded by tag system.Since developing the original idea, we have added a fifth column, for which I am grateful to Barbara Snell. This enables us -if we wish -to add a two-letter tag taken from our translation subject code, to mean that a term, which may otherwise have several equivalents, only has this particular equivalent in a specific subject context.How did we arrive at this solution?|1.Having a single reference source avoids not only oddments of subject or language, but duplication of terms. It also saves search time in that you only look once.As many of my freelance colleagues have discovered, we are putting our translator files on diskette, and will search these by a series of codes we have devised. It made sense to use a common set of language codes for both glossary and translators.How often have you failed to find a term and resorted to looking at dictionaries in a related language? In one small area of our glossary I would expect to find, for example, all the Latin-based equivalents of a given term.Avoiding essential classification by subject has avoided any limitation of the field in which a term may be found, for example, types of nuts, bolts and screws. On the other hand, the terms can be tagged for a particular user as a preferred term. They can also be tagged for a specific field of use.Turning term pairs round, with care, has meant that we not only have the translation in the opposite direction, giving the translator a genuine term, but occasionally we can match pairs so that we have a source term in both languages.With the various tags, we can extract a specific language combination or client's vocabulary and print out just that part of our word bank, as reference for a translation task, or for any other reason. 7.Abbreviations and capital letters have caused a significant problem. Remember that each letter, whether capital or lower case, has a specific value assigned to it in the computer program. This is something which the user cannot control. When we started alphabetising, we found that all capitals -AA to ZZ -came before lower case aa to zz. This meant that German could not mix with other languages.We solved the problem, for us, by starting every entry with an initial letter.So far, since upper or lower case can be of vital significance (e.g. MW, mW), we have had to accept the shortcoming inherent in the system, and have two alphasorts.I'm open to advice or suggestions. I think the principal ingredients for something like this are imagination and an unwillingness to accept that something cannot be done, coupled with a helpful equipment supplier contact. I am convinced that we have nowhere near exhausted either our own inventiveness or the equipment's capabilities.
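In present-day terms, the appendix describes a single flat term list with tag columns, one alphabetic sort and tag-filtered extraction. The sketch below is an assumption-laden illustration of that structure, not the original Wordplex implementation; the tag values, field names and sample terms are invented.

```python
# Hypothetical sketch of the tag-coded, bi-directional glossary: one flat list,
# language/user/subject tags, reversible entries, one alphabetic sort.
glossary = []

def add_pair(source, target, languages, user="", subject="", reversible=True):
    """Enter a term pair; if the terms are direct equivalents, also enter the
    reversed pair under the reversed language tag (bi-directional entry)."""
    glossary.append({"term": source, "equivalent": target,
                     "languages": languages, "user": user, "subject": subject})
    if reversible:
        src_lang, tgt_lang = languages.split(">")
        glossary.append({"term": target, "equivalent": source,
                         "languages": f"{tgt_lang}>{src_lang}",
                         "user": user, "subject": subject})

add_pair("Schraube", "screw", "DE>EN", user="A001", subject="ME")
add_pair("boulon", "bolt", "FR>EN", subject="ME")
add_pair("Mutter", "nut", "DE>EN", subject="ME")

# One alphabetic sort regardless of language; a case-insensitive key is the
# modern stand-in for the paper's workaround of giving every entry an initial
# capital so that upper and lower case do not separate in the sort.
glossary.sort(key=lambda entry: entry["term"].casefold())

def extract(languages=None, user=None, subject=None):
    """Pull out a specific language combination, client vocabulary or subject
    field for printing as a working glossary."""
    return [e for e in glossary
            if (languages is None or e["languages"] == languages)
            and (user is None or e["user"] == user)
            and (subject is None or e["subject"] == subject)]

for entry in extract(languages="DE>EN"):
    print(f'{entry["term"]:15} {entry["equivalent"]:15} '
          f'{entry["languages"]} {entry["user"]} {entry["subject"]}')
```

Entering each reversible pair twice doubles the number of records, but it keeps lookup symmetric without a second, reversed index, which is essentially the trade-off the appendix describes.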
null
null
null
null
It is an electronic box which is connected to a normal golfball typewriter and produces a telex tape.For our purposes, it had four advantages: (a) it was relatively cheap to buy outright (about £1,200 installed);(b) the typewriter could still be used as an ordinary typewriter, even to the extent of producing a telex tape with a proper confirmation typescript, simultaneously;(c) any typist could use it, including the two-finger variety;(d) it prepared standard telex tape at typing speeds, with a buffer memory to allow sensed error correcting.I accept that our solution is not necessarily everyone's answer, and it may be that someone can fault our arguments sufficiently to show that we made a wrong decision. Fair enough, in that case we shall have served as an example whereby others do not make the same mistake -isn't that what learning and sharing knowledge is all about?Despite all the modern electronics, telex still serves a useful function. We find telex useful for quick communication between ourselves and our customers, and also for translations up to a maximum of about 1,000 words or 150 lines. Most telex messages are not longer than 300 words or 40 lines. The important drawback in translation by telex is that you cannot transmit accents except by using conventions which get the message across but are tedious to follow; ideally, both communicators need to know the language, or the circumstances must be such that any accents can genuinely be ignored.Incidentally, we find it of great benefit to leave the tape punch on permanently. This enables us not only to retransmit the telex but also to rerun it if, for example, the paper jams in the machine. I have one customer who has taken this a step further: at our suggestion, his telex operators have a standing instruction that, if the message is 'in foreign', they automatically retransmit it to us; the first time the addressee sees it is when our translation appears on his desk attached to the original.I am still surprised at finding resistance to telex -I have one customer who regularly makes a 15-mile round trip by car with incoming messages which his operators refuse to keyboard and which cannot wait for the post. What he needs is facsimile.Fortunately for users, there is an international specification for these machines (otherwise known as telecopiers or telefax machines), which means that even if they have their own specific operating mode, they can communicate with any other machine of the same CCITT group. The old original telecopier is now labelled as a CCITT Group 1 machine; most users that I know have Group 2 transceivers, which give a transmission time of 3, 4 or 6 minutes for an A4 page. They are operable manually or with optional automatic reception. Group 3 machines are the latest standardised units on the market, intended for the high-volume user, and have transmission times per page measured in seconds (60 or less) rather than minutes.Most, if not all, Group 2 machines have the ability to talk to Group 1 and a large number of Group 3 machines can communicate with their lesser brethren in Group 2.Facsimile has certain significant advantages over telex, but without being a total substitute. It is possible to transmit diagrams and original documents without needing to transcribe them, so the overall process is quicker, guarantees fidelity to the original and in the long term probably works out cheaper than telex. In this respect, facsimile transceivers are better than communicating word processors. 
It is also possible, of course, to transmit any language, whether it uses ideograms or non-Latin scriptprovided that the characters are large enough and clear enough to be legible after transmission (remembering that noise on the telephone line will appear as black dots and streaks). Almost certainly, the major disadvantage is in the comparatively small number of users; as a routine with new clients, I ask if they have facsimile, and am surprised at how many ask 'what's that?'. Yet the industry is already working towards a Group 4 standard.*Without a doubt, the most significant piece of equipment for any translator is the means of presenting his work. As I said at the beginning, ten years ago the norm was a manual typewriter with fabric ribbon, and only a handful of translators offered work on electric machines with carbon film ribbon.Now, the handful have screen-based dedicated word processors, another handful or so have microcomputers with a WP capability, a further handful have electronic typewriters, and the predominant method is now, it would appear, the electric typewriter. The 'trusty (rusty?) manual' is now very definitely down-market in terms of quality of appearance of work and ease of manipulation of text; not to mention typist's fingers as the translator's equivalent of barmaid's elbow.So, where to aim, which type of equipment do we look for? Obviously, finance will play a large part in anybody's decision, and we are now fast approaching the point in this paper where the individual has a lesser need than his corporate counterpart; a translation company or department may arrive at the same decision as the individual translator, but want more units.The range of machines is so vast and varied that I am not even going to attempt to discuss them in detail. * Since this paper was prepared, our company has upgraded to Group 3 facsimile with Group 2 talkdown. Interesting to note that at Group 3 speed it is quicker, more reliable and cheaper to transmit a one-page document by facsimile than by post. The quality of reception is also good enough to retransmit to a Group 2 machine with acceptable legibility. The choice between manual, electric and electronic typewriters, with fixed basket typeface, golfball, thimble or daisy wheel, compared with microprocessor or dedicated word processor, is the first level of choice of any potential purchaser; he then has to examine all the offerings of all the manufacturers of equipment of the type he has decided to buy, and weigh the various operating benefits of each Against a cash budget. The nearest comparison I can offer is buying a car, where you first spend ages reading the brochures, then walk the showrooms, perhaps with a test drive or two, followed by price-hunting, comparison of specifications to see which is best value; do you want 1.1, 1.3, 1.6, 2 or 2.3 litre, in the Basic, L, GL or Ghia trim? After more decision-making on the choice of colours, off you go to the showroom for the last time and -'I'll have that one, because it's a nice colour and I can have it tomorrow'. Fortunately, as yet, a choice of colour does not appear to be a major consideration in electronic office equipment. When we as a company wanted to update our equipment and first started looking four years ago, I admit we were naive on the subject. 
For the most part, we had to use our own judgement, since WP and translation were new and unaccustomed bedfellows, and - I must be honest - a lot of manufacturers not only could not give us answers, but also had never even thought of some of the questions. As simple and quite recent examples, we had to buy Greek daisy wheels from a firm in Honolulu because that was the only place we could find them; Portuguese also proved a problem, because the accents on a normal daisy wheel are not high enough to be used on the upper case letters, even if you can superimpose them.* Looking back, I am amazed that our path into word processing was so smooth. I can remember, before we had full accent capability, that we used to have to add certain accents using one of our obsolescent golfball machines.

* Our equipment is 'clever' enough to enable us to adjust the height of the accents - a recent development - and also generate our own character sets (on-screen) and keyboard layouts, for example, for Russian, Greek and various Eastern European languages. The limitation is set by the availability of suitable daisy wheels.

Let me, very briefly, give some of the thoughts which led us to dedicated word processors:

1. Electronic machines were a step up from electric, but the mini-memory available then was virtually worthless, so the cash difference really went on the buffer memory for sensed error and on the improved presentation and layout capabilities.
2. For a bit more, you could have a magnetic card memory which became infinite, but still without any VDU.
3. Micros - at that time - offered word processing in English and were still hesitantly mastering that level. A bit nearer to the ideal, but the thought of multiple conventional key operations also dissuaded us.
4. So it had to be a minicomputer at least.

There remained only one other major decision and that was to compare stand-alone and central processor, bearing the future in mind. A central processor with individual terminals gave flexibility and the possibility of future expansion. On the other hand, if the CPU or printer failed, we would be stuck. However, applying our now habitual policy of redundancy (i.e. duplication of capability), two totally independent stand-alones would better meet our needs, since the breakdown of one would still enable us to shuffle priorities on the second machine and satisfy our clients. So that was what we did.

One other factor influenced us, and that was computerised accounting. Just as people may be literate or numerate, we felt (and were also advised) that word processors did not perform so well on accounts, nor did general-purpose machines process text as efficiently as dedicated equipment. We therefore decided that we would accept incompatibility of hardware and use a separate specialised system to handle our accounts - but that is another story, and our most recent acquisition, again after much searching to find the best match to our needs.

With dedicated word processing, a single stand-alone system can easily reach £10,000, and this level of investment means a serious commitment in longer-term finance, not to mention a sufficiently stable workload and turnover to make the business risk viable. I can easily understand why individual translators hesitate, and assure them that their fears are shared at the company level. It is a lot of money, and the paying doesn't stop there.
Ribbons are more expensive, diskettes work out at about £5 each depending on what you need, but the most significant after-sales costs are insurance and maintenance. I concede, reluctantly, that a maintenance contract is essential, since it ensures a priority response in case of need, but I really begrudge paying 10-12 per cent of the total hardware price, every year, and no argument will convince me that it is reasonable -after all, our translation work is expected to be consistently 100 per cent reliable.It is in many ways a pity that in today's conference programme the users have their say after the manufacturers and suppliers. On the other hand, this seems to be a typical situation, where the seller dictates the buyer's range of choice. It would have been interesting for the suppliers to listen to the users' needs and then to say what is being done or what can be done to satisfy those specific needs. After all, we already know we are an elitist group of people, and what we need our equipment to do is very often far removed from the machines' original design capabilities.The next heading in my list, you will recall, was communications. More and more people have word processors, more and more people want to communicate. Some, like my own company, have modems which enable us to exchange data With those who speak our language -using the standard teletype code or an IBM protocol. Even then, there is a problem, because we probably transmit at different speeds and so our communication is mutually unintelligible; simply put, I can talk to you but you cannot understand me, therefore we do not communicate. At the international level, an added complication which we have encountered is differences in the software between countries, for the same make of WP. Each manufacturer has his own machine system, and this further complicates the problem. The nearest we appear to have got so far is little nodes of CP/M micros that can intercommunicate, other hardware that can be interfaced using IBM protocols or teletype code (which has limitations) and so on.I have already had a need to supply soft copy (text in diskette form) to clients with other hardware, in one case Wordplex to Philips, and in a second, more recent case it was Wordplex to Wang. The exercise was carried out by transmitting text over the telephone, using the common teletype code, direct into the other machine. However, the text needed reformatting and some concentrated screen editing to make it presentable, simply because some software instructions were not identical in the two processors. The first exercise was English, and was relatively successful -at least, our client was happy with the result; the second one was German, and we had to do all sorts of global substitutions to present a correctly typed copy, in addition to editing work. Frankly, it would have been so expensive to do properly that we abandoned all screen editing and just transmitted from one machine to the other, with the client's agreement. In cash terms, it would have been cheaper to keyboard, although much more time-consuming. I must say however, that as an OCR exercise to get it onto our own WP it was extremely successful -will Teletex solve my problem?I know translators and clients who have word processors and we are all -all -waiting, quivering in anticipation, for the magic system that will put us all in contact. 
I think it would be reasonable to say that my company is one of the pioneers in applying electronics to translation, and we have done our best to remain aware of progress, yet the way ahead still seems to us to be cluttered with incompatible alternatives.An extremely strong message emerges from a recent survey carried out by the Technology special interest group of the Translators' Guild, and that is that translators with word processors want to communicate, and many of those who are not yet committed give the desire for widespread communications compatibility as one of their major reasons for holding back. The other obvious related factor is the high cost forecast for such a capability -even at today's postal rates, £2,000 buys a lot of postage stamps. I sincerely hope that, even if nothing else results from this conference, this message goes home to all those who are in a position to influence matters, and that they act on it.We are told that we can have access to Eurodicautom; we can have Prestel, Viewdata, Teletext with a 't', and so on, with the right communications capability. I already have one modem and software, why should I have to have half a dozen different and expensive ways of doing the same basic task? OCR I have perhaps spent rather too much time looking at equipment, but one needs to have the tools to use them in any form of system. There is one item which I have so far not mentioned and that is Optical Character Readers -or Recognition -OCR for short. An OCR can be connected to a word processor and, in very simple terms, it scans a page of typescript, identifies each letter and transmits a code for that letter to the word processor. To put a full A4 page on a WP takes well under a minute. It sounds marvellous, and it is, but it has limitations: the OCR will only read a limited number of typewriter faces, and may even balk at an equivalent typeface; it will only read portrait, not landscape; if it cannot read a letter, it transmits a block symbol, which is helpful, but it may also think it reads a character correctly and be wrong. Experience helps in remembering the weak points of any particular face. There are also difficulties in reading accented letters. The equipment is probably only of marginal interest to translation departments; individual translators are only likely to meet up with OCR by being asked to provide typescript suitable for input. We use ours primarily as an additional typist, but with variations which I hope to illustrate shortly.'Systems' sounds so technical and complex; here it really only means organised work routes and methods, but using electronics.My first and dominant piece of advice in this context is: take the brakes off your imaginations and let them run free; throw away the blinkers, and dream a little. People have tended to laugh at 'think tanks', but the idea does work, provided you don't produce a spontaneous idea from the depths of your mind only to let your conscious mind reject it as being impossible by normal conventions. We've found this once or twice, and made the idea work eventually, by saying 'I want to do this: how can I achieve it?' 
and going on from there.

Let me quickly list the relevant bits of Able's equipment, before illustrating some of our routines:

- IBM golfball + telex cutter
- telex transceiver
- facsimile (CCITT Groups 1, 2)*
- OCR
- 2 independent word processors (2 keyboards, 2 VDUs, 2 printers)
- communications - modem
- accounts micro + hard disk, other typewriters, photocopiers, etc.

* Now Groups 2/3 but the application is unchanged.

The two processors have dual-ground capability (background, foreground), meaning that one operator can control a continuing background function whilst simultaneously carrying out another operation in foreground - effectively, doubling each machine's capabilities. One WP is hard-wired to the OCR, the other to the transceiver modem, which has a direct external telephone line avoiding the switchboard.

A hypothetical job situation could arise where all this is needed. We may receive a telex enquiry, which is answered by preparing a tape on the telex cutter and transmitting it back; the text comes in by facsimile and is translated internally, draft typed in a machine-readable typeface. The draft is then fed into the WP, edited for layout and misreads, plus linguistic check (of course), then transmitted back to client via the modem and the normal telephone line. That's only a contrived illustration, but different combinations from it are part of our daily routine.

We are also able to generate our own character sets on the WP screen, and position the characters where we want them on the keyboard. This means that, apart from the obvious exceptions like Chinese, Japanese and so on, and, for the time being at least, languages like Arabic which read from right to left, in theory our language capability is only limited by the availability of suitable daisy wheels.

One aspect of British Telecom's Intelpost - facsimile - service which we have recently discovered is that we as subscribers with an Intelpost contract can transmit from our machine to an Intelpost office near a client or translator, and they can use the link in reverse, back to us. It also works to and from other parts of Europe. Provided the recipient is prepared to accept the loss of quality for the sake of speed, it is beneficial.

Thinking of speed, with so many means of communication available, it becomes more important to select the most efficient method for the task in hand:

- Courier service
- Rail/Air
- Communicating WP

I recall, a couple of years ago, seeing a promotional photograph showing a telex for translation that was around 6 metres long. Of course, it depends where it came from, and other factors, but consider the length of time needed to prepare, proof, correct and transmit that amount of text, let alone the cost of that work.

My office is about 40 miles, 65 kilometres, from London. A client insisted on sending us twenty pages by facsimile; it took about one and a half hours, and cost around £20. A motorcycle messenger, in that instance, would have given us a better copy, for less cost, more quickly. Is it cheaper or quicker to send twenty pages by Intelpost to Exeter, or is British Rail's Red Star cheaper and just as quick? It is not efficient to give the automatic response. You need to pause long enough to consider the relative costs and advantages of the different options (mind you, once communicating word processors are linked, this will all be academic instead of epidemic).

Coming back to word processors, one of their greatest advantages is the ease of changing your text, and playing with the shape on-screen until it is correct.
In the 'bad old days', to produce camera copy, we used to read a translator's draft, edit it and mark it for layout; only when we were 100 per cent happy was it given for typing, and heaven help the editor if something had been overlooked and needed correction. Remember, too, the difference in quality between camera copy and headed paper texts -camera copy can have patches or obliterating fluid, provided that it photographs clean, but headed paper work must be perfect. The average page contains around 1,500 key strokes, all of which are potential errors -particularly when retyping and it's the last sheet of client's notepaper! The word processor has changed all that, for us. Now, the draft goes straight to the WP operator, and all the linguistic editing is combined with proofreading and layout approval into a single operation. That saves time and anguish. Only after the final corrections have been approved do we touch the headed stationery.I said earlier that the norm for freelance translators was the electric typewriter -or better. The OCR helps us benefit from that, since a fair proportion of the translators I know can or naturally do produce their work using a machine-readable typeface. One hears tales of lengthy, complicated instructions to translators on how to prepare texts for machine input -simpler, almost, to type the camera copy. Our philosophy is that the OCR is an internal benefit, but not at the translators' expense. Our only brief is to ask for one of a choice of six common type styles. We have been forced to set this principle aside on two occasions, for German, but in both cases it was simply to adopt the convention of 'e' to indicate umlauts, which were then reinstated using global substitutions. It was the logical course of action and was with the active -not reluctanthelp of the translators concerned.Geoffrey Samuelsson-Brown is going to be talking in detail about glossaries on word processors. Those who heard me speak at the Translators' Guild Forum in June will already know that I have devised a word processor-based glossary: P olyglot B i-directional A lphabetical R eversible entry B y user and field coded E lectronic R evisable W ord P rocessor B ased GLOSSARY which uses a system of languages, subject and user codes to extract specific vocabularies from a polyglot alphabetical list. With, I hope, the kind permission of the editor, and since it is both relevant and not in print elsewhere, I have included the text of that talk at the end of this paper. The system goes further than that simplified description, because the language and subject codes are used to help in translator selection; they are also used in the accounts computer and for subsequent statistical analysis. The user code also serves as the client account code, and each translator on our records has a personal reference number. Theoretically, if I wish, I can analyse our periodic workload in terms of words per language per client or per translator; I can analyse our workflow in a given set of combinations to see whether it is regular and high enough perhaps to recruit an additional full-time staff translator to handle it. The whole point of this is that a single unfettered idea can be made to serve many useful purposes, creating a truly integrated system. My next obvious stage will be to look at incorporating order processing into the same system. 
This is about as far as we can go, since I believe the rest of the job - the translation itself - is best done by humans.

The ability of word processors to reprint text can be put to good use with even such simple spare-minute tasks as running off address labels for regular contacts, so that they are already accurately done when you have just two minutes to get an envelope in the post.

Another simple help is with your own personal list of telephone numbers. We used to have a thumb-indexed list of regular numbers - that is, all the A's together, all the B's and so on, but not in alphabetical order within each letter. We then had to scan two or three pages to find the name. Life is much simpler now it is on word processor, in strict ABC order, and updated on demand.

This is what I meant by letting your imagination run free. Some ideas are big and complex, but others are so simple and obvious that they get missed.

I would like to end with one more illustration on the use of electronics, followed by an open question which I hope will merit wider debate. Various methods of calculating and pricing texts are used around Europe, and all have their merits and disadvantages which I assume are well known. Their main common disadvantage is that they are all very time-consuming. I reckon that in my company we waste well over 2,000 man-hours a year - more than one person-year - on that task, and the results are still only approximate, conventional, and open to debate.

Word processors are capable of producing a tally of the length of text automatically - ours produce a character count, others count words, but whether it is bits, bytes or nibbles, it does establish a finite electronic length of text.

Let me go a stage further. Not every text which comes into our office needs to go on word processor, but it takes a fraction of the normal counting time for us to put the typescript through the OCR and establish a WP character count. The joy of this is that it will still happen if the typeface is not machine-readable (experimentally, we have even achieved consistent results with Arabic, for example; not a true count, admittedly, but sufficiently accurate for the purpose).

Now that the use of word processors is becoming more widespread, the time seems ripe to consider adopting a common counting method, and this would appear to be an acceptable solution. Yes, there are still problems to be solved, and yes, there would be a need to convince and educate at all levels. When I think of all the wasted productive time, I certainly feel it is worth serious consideration by all those who are affected or afflicted by word counts.

May I leave you with one last thought to bear in mind when you are considering what to buy: THINK AHEAD.

APPENDIX

Glossary on word processor*

How many times have you knocked over a box of index cards and spent the next hour or more cursing them and everything else under the sun for getting in the way? That's the first question.

How many times have you said to yourself: 'I've seen that term before' and wished that your immaculate filing system was also an efficient retrieval system? That's the second question.

Years ago, in my youth, I was in the happy position of working in a government department which had a card index system of the sort that terminologists dream about. It stood so high that the top drawers canted down for access, and it was around forty feet from end to end. I imagine that by now it is all on computer.
In those days it was an idealist system, full of sources and references and cross-references and so on. The Institutional terminologists - or could I coin the term 'terminologophile'? - among us would have been in raptures. I said government department; like all such monsters, it had an army of contributors and must have cost a considerable sum to produce, over the years. After all, there were no pressures of time or cost-effectiveness. It was simply recognised as essential.

When I first joined Able Translations, in 1972, I was full of enthusiasm for term banks of this sort, and perfectionist dictionaries where every term was actually proven by a text reference, in context, in both source and target languages - all terms then being totally interchangeable.

I quickly learned about the economics of necessity.

Let me throw a few thoughts at you:

1. A conscientious translator will automatically make a note of recurrent terms to ensure consistent repetition.
2. Assuming an expectation of repeat business, those terms need to be identified in some way and kept for future use.
3. Problem terms and abbreviations - things which are perhaps of more general use, but which caused problems in solving - need to be recorded, and retrievable.
4. Now, multiply each of these mini-glossaries by the number of languages you work in.
5. And now, multiply the total by the number of discrete subject fields into which your work may fall.
6. Finally, multiply by your number of clients - not just the active ones. Don't forget that a client source may send you work from several of their clients (translation companies, advertising agencies, translation departments of companies, and so on).

* Delivered at a forum of the Translators' Guild, London, June 1983.

The simple answer to this complex problem is... utter confusion.

Let me give you a little more history or perhaps a skeleton from our cupboard: Able's reference library contains several thousand terms which have been painstakingly gathered over the years. Little bundles of cards with a rubber band round them, pages of typescript notes, pages of manuscribble, and so on. For years we had realised that they were useless as they were and - that famous promise - one day we would merge them. The problem was the method.

After a year of research and enquiry, demonstrations and quotes, in 1981 we bought two Wordplex WP systems. The word processors gave us an immediate ability to file, alphabetise and search for terms, so the essential problem was solved, but we still had to sort out exactly what we wanted to achieve.

First a riddle: why is it that whenever you are hunting for a term, if you are going to find it at all, it is in the last place you look? Because, when you find it, you stop looking...

A shelf-full of dictionaries is typical of this. There is nothing to be done at that level... yet. But glossaries, term banks, collections of words which you make yourself are a different question, because the decisions are yours. For good or bad, this is what we have done.

Our problem

1. As a company, we handle many language combinations.
2. We work in a wide range of subjects.
3. We need to isolate specific customer-preferred terms.
4. We wanted to minimise search time and locations.

Our solution

1. A single word processor-based glossary.
2. Capable of identifying language combination by tag system.
3. Multilingual (Latin alphabet).
4. One alphabetic sort, regardless of language.
5. Bi-directional: where terms are direct equivalents in source and target language, they are reversed electronically and entered both ways.
6. User or usage coded by tag system.

Since developing the original idea, we have added a fifth column, for which I am grateful to Barbara Snell. This enables us - if we wish - to add a two-letter tag taken from our translation subject code, to mean that a term, which may otherwise have several equivalents, only has this particular equivalent in a specific subject context.

How did we arrive at this solution?

1. Having a single reference source avoids not only oddments of subject or language, but duplication of terms. It also saves search time in that you only look once.
2. As many of my freelance colleagues have discovered, we are putting our translator files on diskette, and will search these by a series of codes we have devised. It made sense to use a common set of language codes for both glossary and translators.
3. How often have you failed to find a term and resorted to looking at dictionaries in a related language? In one small area of our glossary I would expect to find, for example, all the Latin-based equivalents of a given term.
4. Avoiding essential classification by subject has avoided any limitation of the field in which a term may be found, for example, types of nuts, bolts and screws. On the other hand, the terms can be tagged for a particular user as a preferred term. They can also be tagged for a specific field of use.
5. Turning term pairs round, with care, has meant that we not only have the translation in the opposite direction, giving the translator a genuine term, but occasionally we can match pairs so that we have a source term in both languages.
6. With the various tags, we can extract a specific language combination or client's vocabulary and print out just that part of our word bank, as reference for a translation task, or for any other reason.
7. Abbreviations and capital letters have caused a significant problem. Remember that each letter, whether capital or lower case, has a specific value assigned to it in the computer program. This is something which the user cannot control. When we started alphabetising, we found that all capitals - AA to ZZ - came before lower case aa to zz. This meant that German could not mix with other languages. We solved the problem, for us, by starting every entry with an initial letter. So far, since upper or lower case can be of vital significance (e.g. MW, mW), we have had to accept the shortcoming inherent in the system, and have two alphasorts. I'm open to advice or suggestions.

I think the principal ingredients for something like this are imagination and an unwillingness to accept that something cannot be done, coupled with a helpful equipment supplier contact. I am convinced that we have nowhere near exhausted either our own inventiveness or the equipment's capabilities.

Appendix:
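As a small aside on the tagging and sorting scheme just described, the sketch below shows the same ideas in modern terms: glossary entries carrying language-pair, user and subject tags, extraction of one client's or one combination's vocabulary, and a single alphabetical sort that sidesteps the "AA-ZZ before aa-zz" problem by sorting on a lower-cased key while keeping the original capitalisation (MW versus mW) visible in the entry. The record layout, tag codes and example terms are invented for illustration; the original glossary lived on a 1983 word processor, not in Python.

```python
# Minimal sketch of the tag-and-sort ideas from the appendix.
# Field names, tag codes and example terms are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class Entry:
    source: str        # term as entered
    target: str        # its equivalent
    langs: str         # language-pair tag, e.g. "DE>EN"
    user: str = ""     # optional client/user tag
    subject: str = ""  # optional two-letter subject tag

GLOSSARY = [
    Entry("Zahnrad", "gear wheel", "DE>EN", subject="ME"),
    Entry("gear wheel", "Zahnrad", "EN>DE", subject="ME"),  # reversed pair
    Entry("MW", "megawatt", "EN>EN", subject="EL"),
    Entry("mW", "milliwatt", "EN>EN", subject="EL"),
]

def extract(glossary, langs=None, user=None, subject=None):
    """Pull out one language pair's or one client's vocabulary by tag,
    as the paper describes doing before printing a word list."""
    return [e for e in glossary
            if (langs is None or e.langs == langs)
            and (user is None or e.user == user)
            and (subject is None or e.subject == subject)]

def alphasort(entries):
    """One alphabetical sort regardless of case: a lower-cased key avoids
    all capitals sorting ahead of all lower case."""
    return sorted(entries, key=lambda e: (e.source.lower(), e.source))

for e in alphasort(extract(GLOSSARY, subject="EL")):
    print(f"{e.source:12} {e.target:12} {e.langs} {e.subject}")
```

The design choice worth noting is the sort key: the system described above had to accept two separate alphasorts because case carried meaning; a composite key (case-folded first, original form second) gives one sort order while still keeping MW and mW distinct.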
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
495
0
null
null
null
null
null
null
null
null
c149a83bdea7c9c4af7c7a2e7401ab9398fb410c
43310334
null
Recent {C}anadian experience in machine translation
The experience to be discussed is that of the Translation Bureau of the Government of Canada, which provides translation services to all federal departments and agencies, from Parliament to the National Film Board. Canada is officially bilingual. Under the Official Languages Act, this means that all services offered to the public by the federal government must be provided in both official languages, and that all federal public servants have the right to work in English or French. The progress of official bilingualism in Canada has resulted in a steadily increasing work load for the Translation Bureau. The Bureau translated 73 million words in 1968-69; in 1981-82, it translated 276 million words, and now employs about 1800 persons.
{ "name": [ "Macklovitch, Elliott" ], "affiliation": [ null ] }
null
null
Proceedings of the International Conference on Methodology and Techniques of Machine Translation: Processing from words to language
1984-02-01
8
0
null
This enormous work load accounts for the Bureau's long-standing interest in machine and machine-aided translation. The Bureau began to subsidize research and development in MT more than ten years ago. It was the Translation Bureau that funded the development of the METEO system by TAUM, the University of Montreal research group in machine translation between 1974 and 1976; and the Bureau has been responsible for the operation of METEO since then. METEO translates weather bulletins from English to French at a rate of over 11,000 words a day, 365 days a year, and is recognized worldwide as one of the major successes, in the field of machine translation. (For a detailed description of METEO, see Chevalier et al, 1978.) In 1976, the Translation Bureau commissioned TAUM to develop the prototype of a new system to translate the maintenance manuals of the CP-140, a coastal patrol aircraft that the department of National Defence had just ordered and was to receive in 1980. The technical documentation for that aircraft, according to unofficial sources amounted to some 90 million words. For anyone who has never seen maintenance manuals for equipment as sophisticated as an aircraft, it should be stressed that only highly specialized translators can translate them; indeed, the educated layman can barely understand them. It was estimated in 1976 that it would take four specialists ten years to complete the French translation of the technical documentation for the CP-140, by which time the aircraft would be obsolete. It was for this reason that the Translation Bureau looked to machine translation for help. TAUM's initial estimate was that it would take about three years to develop the prototype of this new system, but this turned out to be overly optimistic. New high-level programming languages had to be designed and tested. Organizational problems arose in co-ordinating the activities of one team in Montreal and another in Ottawa. To complicate matters, TAUM lost its director early on in the project. Additional personnel had to be recruited and a great deal of time invested in training. Only then could the task of writing the extensive grammars and dictionaries required for the complicated sublanguage of aircraft maintenance manuals be undertaken in earnest. With the approval of the Translation Bureau, TAUM decided to base the construction of the new prototype on a 70,000 word corpus constituted of extracts from hydraulics manuals of several different aircraft.In March 1979, TAUM gave a public demonstration of the new system, baptised TAUM-AVIATION. The demonstration itself was a success; under television lights, it was shown that AVIATION could translate the hydraulics manuals of the CP-140. However, it had become obvious that without large additional investments, AVIATION would not be able to translate the other manuals of the CP-140 and in any event, not in time for the expected delivery of the plane in 1980. Several months later, the federal Treasury Board approved a new contract between the Translation Bureau and the University of Montreal, on the condition that TAUM-AVIATION be subjected to independent evaluation and that a feasibility study be conducted on the system's extendibility.The evaluation of TAUM-AVIATION (cf Gervais 1980) was conducted in March 1980 and had two main objectives: 1) to assess the system's linguistic performance, and 2) to analyse its development and operating costs. 
Samples taken from the hydraulics manuals of the CP-140, the Lockheed 1011 and a tank recovery vehicle were submitted to TAUM-AVIATION for translation and then revised by specialists from the Translation Bureau and an outside translation firm. The revisors were also asked to rate the intelligibility, fidelity and style of each sentence translated by the system, following a procedure used for an evaluation of the CEC's version of SYSTRAN in 1977. The same three texts were translated by qualified technical translators, two from the Bureau and one an outside freelancer, and these translations too were revised and rated. This allowed the evaluator to compare the ratings assigned to the machine translation and to the three human versions. As it turned out, the translations produced by TAUM-AVIATION received a satisfactory overall rating, about 80 percent of the rating assigned to the human versions. However, the system did not produce any translation for about one third of the sentences, titles or table entries that made up the test corpus.At first glance, the percentage of units for which TAUM-AVIATION produced no translation may appear extremely high. The principal evaluator, however, did not find this alarming:"Il faut se rappeler cependant que si le système TAUM-AVIATION ne produit aucune sortie dans certains cas, cela ne signifie pas qu'il en est incapable. Cela découle plutôt d'une décision des concepteurs du système. Ceux-ci ont jugé qu'il valait mieux ne rien produire plutôt que de risquer de produire du texte incompréhensible. Il est fort possible que dans un contexte d'exploitation il s'avère préférable de procéder autrement." p59The risk of incomprehensible output is minimized in a second generation system like TAUM-AVIATION by basing the translation phase on a complete analysis of each source language input unit. Whenever a unit did not receive a complete analysis in TAUM-AVIATION, no translation was produced. As the evaluator points out, this is a perfectly reasonable strategy ... for a system under development. What happened in the evaluation was that many of the errors that prevented units from being analysed were caused by incomplete or incorrectly stated selectional restrictions in the analysis dictionary. In an operational context, it would not have been difficult to modify the system so that this sort of minor, local error did not always block the translation of an entire unit.The proportion of untranslated units did prove to be significant, however, when it came to establishing the direct operating costs of producing translations using TAUM-AVIATION. Direct operating costs were calculated by adding the cost of putting the test corpus into machine readable form, the cost of the actual machine time required to translate the texts, and the cost of revision time. Revision time accounted for 37 percent of the total cost of a final version of the machine translation; and since the revisors gave a generally favourable rating to the translations that the system did produce, much of this revision cost must be attributed to the time they spent in translating the units for which the machine produced no output. The direct operating cost of producing a revised translation of the 14,000 word test corpus using TAUM-AVIATION turned out to be $0.183 per word; the cost of human translation and revision of the same corpus was $0.145 per word. 
[1] Yet it was not this $0.038 a word difference that was most damaging for TAUM-AVlATION, for the evaluator noted that direct operating costs could reasonably be expected to decrease once the system was implanted in an operational context. What did prove fatal were the system's indirect operating costs, and particularly the cost of adding new dictionary entries. Based on a rough extrapolation of the rhythm at which the dictionary teams were working at the time, the evaluator estimated that a person could index no more than 450 new terms a year, at a cost of about $49 per term. Moreover, to amortize the cost of maintaining an eight-man operating team, the system would have to translate between five and six million words a year. On the possibility of eventually operating TAUM-AVIATION cost effectively, the evaluator was thus led to the following conclusions :"Il est impossible d'affirmer, à la lumière de la présente évaluation, que l'utilisation du système TAUM-AVlATION peut, dans un avenir prévisible, devenir rentable, c'est à dire coûter moins cher que la traduction humaine, principalement à cause de ses coûts indirects et des conséquences qui en découlent." p145"La nécessite de trouver annuellement 5 à 6 millions de mots à traduire pour rentabiliser partiellement l'exploitation du système rend inopportune la poursuite du développement sans envisager d'autres applications." pl49One of the objectives of the feasibility study (cf Gobeil 1981) requested by Treasury Board was precisely to determine whether TAUM-AVIATION could be extended to texts other than the hydraulics manuals for which it was designed. To that end, a 5800 word corpus taken from the electronics manuals of the CP-140 was submitted to the system, and the results compared with those from the March 1980 evaluation. This part of the study was not entirely conclusive, however; lack of time and resources prevented the translations produced from being revised and rated in the same rigorous manner as in the evaluation conducted by M. Gervais. Generally speaking, however, the results obtained on this electronics corpus were of comparable quality to those obtained on the hydraulics test the previous year. The performance of the system's grammars improved, but dictionary problems increased, as one would expect when texts in a new domain were being translated using entries conceived for hydraulics manuals.Another of the objectives of the feasibility study was to inventory the types of texts translated by the federal government, classifying them according to their syntactic complexity and extent of vocabulary in order to identify those most amenable to machine translation. This inventory showed that the Bureau did not regularly translate five to six million words a year of maintenance manuals in hydraulics or other related domains. Recall that this was the volume that TAUM-AVIATION would have to translate in order to be operated cost effectively.Treasury Board had also requested that the feasibility study determine whether there were any other commercial MT systems which could help the Bureau meet its needs. A detailed questionnaire was therefore prepared and sent to twelve suppliers or potential suppliers of MT systems. Those that translated from English to French or from French to English were asked if they would be willing to submit their systems to a practical evaluation. The suppliers of three systems agreed: ALPS, SYSTRAN II and WEIDNER. 
Each was given the same 6300 word corpus to translate, comprised of extracts from trademark journals, staffing documents and the maintenance manuals of the CP-l40. The raw machine output was submitted to revisors who were asked to rank the different versions and to note the time it took them to produce an acceptable translation. A unit cost for translation and revision was then calculated for each system. The authors of the feasibility study found that the direct operating cost of producing a revised translation using each of the above-mentioned systems was lower than the cost of human translation and revision as determined on the 1980 evaluation, and thus lower than the unit cost of producing revised translations using TAUM-AVIATION.[2] The cost of making new dictionary entries was also found to be significantly lower than the $49 per entry estimated for TAUM-AVIATION. In terms of the quality of the translations produced, however, the results were far less satisfactory. In fact, the revisors refused to rank the translations in terms of technical accuracy, saying that they were all "pénible à reviser", or arduous to revise. In many cases, they did not modify the machine translation but found it easier to retranslate directly from the original. Moreover, none of the systems delivered the increase in translator productivity that their suppliers advertised. The authors of the feasibility study were thus unable to recommend that the Bureau purchase or make use of any of the three systems for its regular operations without further studies being conducted on much larger samples.The feasibility study was completed in May 1981. In September of that year, TAUM was disbanded for want of funds. Former TAUMists, like myself, are often asked how the Translation Bureau could abandon machine translation in Canada. This is somewhat of a misconception, based on a misunderstanding of the relationship between TAUM and the Trans1ation Bureau. The contract that the Bureau signed with the University of Montreal in 1975 was for the development within three years, of a system that could eventually be used to translate the maintenance manuals of the CP-140. Following the presentation of that system to the Bureau in 1979, TAUM was granted an additional one-year contract to continue the development and documentation of the system. On the basis of the evaluation conducted in 1980 and the feasibility study conducted in 1981, the Bureau decided, in September 1981, to abandon its objective of using TAUM-AVIATION to translate the manuals of the CP-140. From the point of view of the Canadian taxpayer, this decision was certainly justifiable. Between 1976 and 1980, the Bureau had invested over $2.7 million in MT. In return, it found itself the owner of a system whose cost-effectiveness had not been demonstrated. The Bureau therefore decided that it needed a period of reflection in order to draw the lessons of its recent involvement in MT. As for TAUM, it made the unfortunate error of putting all its eggs in the same basket. When the AVIATION contract with the Bureau ended, it found itself with no other source of funding.None of this is intended to suggest that TAUM-AVIATION was a failure. On the contrary, from a scientific point of view, the project carried many of the principles of second generation MT to their logical conclusion. The result was an extremely sophisticated system that produced fully automatic, high quality translations of texts in a well-defined sublanguage. 
However, TAUM-AVIATION was not, in the fall of 1981, a system that was ready for large-scale operational production; nor, given the high cost of extending its dictionaries, was it a system that could easily become economically viable. It is important to ask why this is so. Why was dictionary construction so costly in TAUM-AVIATION? In particular, was this due to some fundamental flaw in TAUM's basic approach?Under the second generation sublanguage approach employed at TAUM, a MT system is designed for a specific sublanguage, not for arbitrary texts from any domain. Such a system seeks to take advantage of each sublanguage's lexical, syntactic, semantic and textual restrictions in order to achieve maximum disambiguating power.[3] In AVIATION'S analysis dictionary, for example, the entries for predicate words defined co-occurrence restrictions on their arguments; these restrictions were stated in terms of semantic classes that were found to be particularly relevant for texts in hydraulics maintenance. At transfer, each potential context thus defined for a lexical unit could then be used to state the necessary translation tests.[4] Writing dictionary entries under this approach requires an extensive corpus that is representative of texts in the particular sublanguage, and a careful study of that corpus in order to first determine the relevant semantic classes and then establish each lexical unit's co-occurrence restrictions and translation tests. This is a time-consuming and therefore a costly process, but one that I would maintain is necessary if a system is to automatically produce high quality translations in a sublanguage as complex as aircraft maintenance manuals. To take just one example, consider the following typical maintenance command:(1) Remove fitting and drain plug.This sentence is syntactically ambiguous, ie it could be parsed as a conjunction of imperatives, in which case drain is taken to be a verb, or as a single imperative with a conjoined object, in which case drain is taken to be a noun. The only way of blocking the former incorrect analysis in a second generation MT system is to specify in the source language dictionary entry for the verb drain that plugs are not drainable, although such objects as tanks and reservoirs are. In other words, a syntactic enumeration of permissible structures is often insufficient; the system must be provided with semantic features that distinguish between such objects as plugs and reservoirs, as well as with a specification of each predicate's complementation. This is a fairly fine semantic distinction, but one that would appear to be necessary for the automatic translation of hydraulics maintenance manuals.A system like WEIDNER does not provide for selectional restrictions on predicate arguments. The only source language information given in its dictionary entries is the lexical unit's syntactic category. WEIDNER can distinguish homographs like drain in certain syntactic configurations, eg when the word is immediately preceded by an article; but not in a configuration like that in (1), where both a verb and a noun may occur after the conjunction (cf: Remove fitting and drain tank. Obviously, new dictionary entries will be relatively inexpensive in such a system. What will be expensive is revision. An interactive system like ALPS may, for sentences such as (1), interrupt the analysis process and ask a human operator to help it resolve the ambiguity. This too takes time, but one would normally expect to be compensated by less revision effort. 
Unfortunately, in the feasibility study and in a subsequent operational trial of ALPS at the Translation Bureau, this did not prove to be the case.[5] Moreover, the human operators tended to find it frustrating to be asked the same sorts of questions over and over again.it may be objected that revision costs for TAUM-AVIATION were also found to be high in the 1980 evaluation. This is true, but not for the same reasons as the other systems tested in the feasibility study. The translations produced by TAUM-AVIATION were generally of good quality, and certainly revisable. The main problem was that the system failed to produce translations for too high a proportion of units, and these had to be translated by the revisor. The other systems tested in the feasibility study nearly always produced translations, but these were too often agrammatical and hence unrevisable.The ideal solution, of course, would be not to have to sacrifice quality, but to increase the proportion of units translated by TAUM-AVIATION, by making the system more fail-safe, ie more resistant to minor errors in its dictionaries or grammars. Ways in which this could be done are discussed in Isabelle 1981 . One of the suggestions made there is to build a sort of monitor into the analysis component which would be activated when no analysis was produced and which would retake the analysis of the current unit after temporarily neutralizing a series of semantic or syntactic restrictions. In this way, a translation would be produced for a much higher proportion of units.[6] Something as straightforward as linking TAUM-AVIATION to a word processor would also facilitate revision and lower overall translation costs.Dictionary construction in such a system, however, will always be relatively costly, or at least costlier than in systems like ALPS or WEIDNER. This is not due to a flaw in TAUM's basic approach, but simply because TAUM aimed for a higher level of comprehension of the texts it automatically translated than did ALPS or WEIDNER. We saw this in the example discussed above. In TAUM-AVIATION, the pivotal representations that were the output of analysis and the input to transfer sought to identify the basic predicate-argument structure of each sentence and distinguish between the various meanings of words. [7] This was thought to be a minimum without which the system would not be able to consistently produce revisable translations. We also saw the kind of dictionary effort that was required to attain this level of comprehension. Another factor which influences the cost of dictionary construction in a second generation system like TAUM-AVIATION is the complexity of the sublanguage being translated. The more varied the range of structures in the sublanguage, the longer it takes to describe lexical co-occurrence restrictions. The larger the vocabulary of the sublanguage, the more homography it tends to display; these homographs must be distinguished in the system's dictionaries if they are not to reappear as problems at revision. Indeed, one of the principal lessons to be drawn from the experience of TAUM-AVIATION is that certain sublanguages are so complex that it is extremely difficult to attain the level of comprehension necessary for their automatic translation by means of second generation technology.Since the close of the AVIATION project in 1981, the Translation Bureau has not lost interest in machine translation. With a workload approaching 300 million words a year, it cannot afford to do so. 
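To make the two mechanisms discussed above concrete - the selectional restrictions that block the verb reading of drain in sentence (1), and the proposed monitor that retries a failed analysis with those restrictions relaxed rather than leaving a unit untranslated - here is a toy sketch. The feature sets, lexicon and function names are invented for illustration; TAUM-AVIATION's actual dictionaries and analyser were written in its own specialised formalisms, not in Python.

```python
# Toy illustration of selectional restrictions and a relaxation fallback.
# The lexicon below is an assumption, not TAUM's actual dictionary content.

DRAINABLE = {"tank", "reservoir"}        # objects the verb "drain" accepts
NOUN_COMPOUND_HEADS = {"plug", "valve"}  # nouns "drain" can modify

def analyse(obj_noun, strict=True):
    """Decide whether 'drain' is a verb or a noun modifier in
    'Remove fitting and drain <obj_noun>'."""
    verb_ok = (obj_noun in DRAINABLE) or not strict
    noun_ok = obj_noun in NOUN_COMPOUND_HEADS
    if verb_ok and not noun_ok:
        return "conjoined imperatives: [remove fitting] and [drain %s]" % obj_noun
    if noun_ok and not verb_ok:
        return "single imperative: remove [fitting and drain %s]" % obj_noun
    if not verb_ok and not noun_ok:
        return None  # no analysis: the unit would go untranslated
    return "ambiguous: flag for the revisor"

def analyse_with_monitor(obj_noun):
    """The 'monitor' idea: if the strict pass fails, retry with the
    semantic restrictions neutralised and flag the output for revision."""
    result = analyse(obj_noun, strict=True)
    if result is None:
        result = analyse(obj_noun, strict=False)
        result = "(relaxed, needs revision) " + (result or "no parse at all")
    return result

print(analyse("plug"))                 # noun reading wins: drain plug is a part
print(analyse("tank"))                 # verb reading wins: drain the tank
print(analyse_with_monitor("filter"))  # unknown word: relaxed pass still answers
```

The point of the fallback is exactly the trade-off described above: a strict pass protects translation quality, while the relaxed retry ensures that a minor gap in the dictionary no longer blocks the translation of an entire unit.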
Moreover, the Bureau still believes in the utility of the second generation sublanguage approach. The problem, as the Bureau now sees it, is to judiciously select the sublanguage to which a second generation system is to be applied. Weather bulletins combine the ideal characteristics for machine translation: high volume coupled with restricted syntax and vocabulary. Aircraft maintenance manuals are at the opposite end of the complexity scale; in fact, they may be too complex for second generation systems. But what about the many sublanguages in between? Which of these might be amenable to second generation technology, and what other factors, aside from a suitably defined notion of linguistic complexity, need to be considered in order to guarantee success? The Bureau has been subsidizing research into these questions, conducted by Professor Richard Kittredge of the University of Montreal. Professor Kittredge examined samples of 17 varieties of texts which seemed likely or desirable candidates for machine-aided translation within the Bureau, and identified a number of sublanguages that could be handled using current technology (cf. Kittredge 1983). The Bureau will be conducting a feasibility study of one such application this year, and if the results are satisfactory, the development of the first of a series of small-scale MT systems could begin in 1984-85. Other MT-related projects for the current year include the transfer of METEO onto a micro-computer and the introduction of TERMIUM III, the new version of the government's computerized terminology bank.

The Bureau now has a better understanding of the limits of second generation MT systems, and recognizes that fundamental research will be required before development can begin on the next generation of systems. To help orient the direction of such research, the Translation Bureau and the Department of Communications recently commissioned a large-scale study into the current state of natural language processing and artificial intelligence, with special emphasis on applications to machine translation and other related fields.

This study is expected to contain recommendations on the manner in which the government can best co-operate with universities and private enterprise in order to reactivate MT in Canada. All those interested in machine translation in the country are anxiously awaiting the publication of the final report.

Notes

1- It should be noted, however, that overall, the revised human version took two and a half times longer to produce than the revised machine version. The final revised versions were judged comparable in quality by a number of potential users, including Air Canada and the Department of National Defence.

2- Over the three types of texts, WEIDNER was found to be the least costly at $0.089 per word; next was ALPS at $0.113; and finally, SYSTRAN at $0.143 per word.

3- For a detailed description of the sublanguage of aircraft maintenance manuals and a discussion of the relevance of sublanguages for automatic translation, see Lehrberger 1978.

4- For a discussion of the work of the translator in a second generation system like TAUM-AVIATION, see Chevalier et al 1981.

5- A report on the six-month operational trial of ALPS at the Bureau should be available shortly.

6- The units thus produced would of course be flagged for special attention by the revisor.

7- See Lehrberger 1981 for a discussion of TAUM's linguistic model.

Lehrberger, J (1978)
null
null
null
null
Main paper: This enormous workload accounts for the Bureau's long-standing interest in machine and machine-aided translation. The Bureau began to subsidize research and development in MT more than ten years ago. It was the Translation Bureau that funded the development of the METEO system by TAUM, the University of Montreal research group in machine translation, between 1974 and 1976; and the Bureau has been responsible for the operation of METEO since then. METEO translates weather bulletins from English to French at a rate of over 11,000 words a day, 365 days a year, and is recognized worldwide as one of the major successes in the field of machine translation. (For a detailed description of METEO, see Chevalier et al, 1978.)

In 1976, the Translation Bureau commissioned TAUM to develop the prototype of a new system to translate the maintenance manuals of the CP-140, a coastal patrol aircraft that the Department of National Defence had just ordered and was to receive in 1980. The technical documentation for that aircraft, according to unofficial sources, amounted to some 90 million words. For anyone who has never seen maintenance manuals for equipment as sophisticated as an aircraft, it should be stressed that only highly specialized translators can translate them; indeed, the educated layman can barely understand them. It was estimated in 1976 that it would take four specialists ten years to complete the French translation of the technical documentation for the CP-140, by which time the aircraft would be obsolete. It was for this reason that the Translation Bureau looked to machine translation for help.

TAUM's initial estimate was that it would take about three years to develop the prototype of this new system, but this turned out to be overly optimistic. New high-level programming languages had to be designed and tested. Organizational problems arose in co-ordinating the activities of one team in Montreal and another in Ottawa. To complicate matters, TAUM lost its director early on in the project. Additional personnel had to be recruited and a great deal of time invested in training. Only then could the task of writing the extensive grammars and dictionaries required for the complicated sublanguage of aircraft maintenance manuals be undertaken in earnest. With the approval of the Translation Bureau, TAUM decided to base the construction of the new prototype on a 70,000-word corpus made up of extracts from the hydraulics manuals of several different aircraft.

In March 1979, TAUM gave a public demonstration of the new system, baptised TAUM-AVIATION. The demonstration itself was a success; under television lights, it was shown that AVIATION could translate the hydraulics manuals of the CP-140. However, it had become obvious that, without large additional investments, AVIATION would not be able to translate the other manuals of the CP-140 and, in any event, not in time for the expected delivery of the plane in 1980. Several months later, the federal Treasury Board approved a new contract between the Translation Bureau and the University of Montreal, on the condition that TAUM-AVIATION be subjected to independent evaluation and that a feasibility study be conducted on the system's extendibility.

The evaluation of TAUM-AVIATION (cf. Gervais 1980) was conducted in March 1980 and had two main objectives: 1) to assess the system's linguistic performance, and 2) to analyse its development and operating costs.
Samples taken from the hydraulics manuals of the CP-140, the Lockheed 1011 and a tank recovery vehicle were submitted to TAUM-AVIATION for translation and then revised by specialists from the Translation Bureau and an outside translation firm. The revisors were also asked to rate the intelligibility, fidelity and style of each sentence translated by the system, following a procedure used for an evaluation of the CEC's version of SYSTRAN in 1977. The same three texts were translated by qualified technical translators, two from the Bureau and one an outside freelancer, and these translations too were revised and rated. This allowed the evaluator to compare the ratings assigned to the machine translation and to the three human versions. As it turned out, the translations produced by TAUM-AVIATION received a satisfactory overall rating, about 80 percent of the rating assigned to the human versions. However, the system did not produce any translation for about one third of the sentences, titles or table entries that made up the test corpus.

At first glance, the percentage of units for which TAUM-AVIATION produced no translation may appear extremely high. The principal evaluator, however, did not find this alarming: "It must be remembered, however, that if the TAUM-AVIATION system produces no output in certain cases, this does not mean that it is incapable of doing so. Rather, it follows from a decision by the system's designers, who judged it better to produce nothing at all than to risk producing incomprehensible text. It is quite possible that in an operational context it would prove preferable to proceed otherwise." (p. 59)

The risk of incomprehensible output is minimized in a second generation system like TAUM-AVIATION by basing the translation phase on a complete analysis of each source language input unit. Whenever a unit did not receive a complete analysis in TAUM-AVIATION, no translation was produced. As the evaluator points out, this is a perfectly reasonable strategy ... for a system under development. What happened in the evaluation was that many of the errors that prevented units from being analysed were caused by incomplete or incorrectly stated selectional restrictions in the analysis dictionary. In an operational context, it would not have been difficult to modify the system so that this sort of minor, local error did not always block the translation of an entire unit.

The proportion of untranslated units did prove to be significant, however, when it came to establishing the direct operating costs of producing translations using TAUM-AVIATION. Direct operating costs were calculated by adding the cost of putting the test corpus into machine readable form, the cost of the actual machine time required to translate the texts, and the cost of revision time. Revision time accounted for 37 percent of the total cost of a final version of the machine translation; and since the revisors gave a generally favourable rating to the translations that the system did produce, much of this revision cost must be attributed to the time they spent in translating the units for which the machine produced no output. The direct operating cost of producing a revised translation of the 14,000-word test corpus using TAUM-AVIATION turned out to be $0.183 per word; the cost of human translation and revision of the same corpus was $0.145 per word.[1]
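Putting the figures just quoted together gives a rough sense of the totals involved (an illustrative recap only; the breakdown below is not part of the evaluation report):

# Illustrative recap of the figures reported above for the 14,000-word test corpus.
corpus_words = 14_000
machine_cost = 0.183 * corpus_words    # revised machine translation: about $2,562
human_cost   = 0.145 * corpus_words    # revised human translation:   about $2,030

print(round(machine_cost - human_cost))   # 532 -> roughly $0.038 more per word for the machine version
print(round(0.37 * machine_cost))         # about $948 of the machine figure was revision time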
Yet it was not this $0.038-per-word difference that was most damaging for TAUM-AVIATION, for the evaluator noted that direct operating costs could reasonably be expected to decrease once the system was implanted in an operational context. What did prove fatal were the system's indirect operating costs, and particularly the cost of adding new dictionary entries. Based on a rough extrapolation of the rate at which the dictionary teams were working at the time, the evaluator estimated that a person could index no more than 450 new terms a year, at a cost of about $49 per term. Moreover, to amortize the cost of maintaining an eight-man operating team, the system would have to translate between five and six million words a year. On the possibility of eventually operating TAUM-AVIATION cost-effectively, the evaluator was thus led to the following conclusions: "It is impossible to affirm, in the light of the present evaluation, that the use of the TAUM-AVIATION system can, in the foreseeable future, become cost-effective, that is, cost less than human translation, principally because of its indirect costs and the consequences that flow from them." (p. 145) "The need to find five to six million words a year to translate in order to make the operation of the system even partially cost-effective makes it inadvisable to pursue development without considering other applications." (p. 149)

One of the objectives of the feasibility study (cf. Gobeil 1981) requested by Treasury Board was precisely to determine whether TAUM-AVIATION could be extended to texts other than the hydraulics manuals for which it was designed. To that end, a 5,800-word corpus taken from the electronics manuals of the CP-140 was submitted to the system, and the results compared with those from the March 1980 evaluation. This part of the study was not entirely conclusive, however; lack of time and resources prevented the translations produced from being revised and rated in the same rigorous manner as in the evaluation conducted by M. Gervais. Generally speaking, however, the results obtained on this electronics corpus were of comparable quality to those obtained on the hydraulics test the previous year. The performance of the system's grammars improved, but dictionary problems increased, as one would expect when texts in a new domain are being translated using entries conceived for hydraulics manuals.

Another of the objectives of the feasibility study was to inventory the types of texts translated by the federal government, classifying them according to their syntactic complexity and extent of vocabulary in order to identify those most amenable to machine translation. This inventory showed that the Bureau did not regularly translate five to six million words a year of maintenance manuals in hydraulics or other related domains. Recall that this was the volume that TAUM-AVIATION would have to translate in order to be operated cost-effectively.

Treasury Board had also requested that the feasibility study determine whether there were any other commercial MT systems which could help the Bureau meet its needs. A detailed questionnaire was therefore prepared and sent to twelve suppliers or potential suppliers of MT systems. Those that translated from English to French or from French to English were asked if they would be willing to submit their systems to a practical evaluation. The suppliers of three systems agreed: ALPS, SYSTRAN II and WEIDNER.
Each was given the same 6,300-word corpus to translate, made up of extracts from trademark journals, staffing documents and the maintenance manuals of the CP-140. The raw machine output was submitted to revisors who were asked to rank the different versions and to note the time it took them to produce an acceptable translation. A unit cost for translation and revision was then calculated for each system. The authors of the feasibility study found that the direct operating cost of producing a revised translation using each of the above-mentioned systems was lower than the cost of human translation and revision as determined in the 1980 evaluation, and thus lower than the unit cost of producing revised translations using TAUM-AVIATION.[2] The cost of making new dictionary entries was also found to be significantly lower than the $49 per entry estimated for TAUM-AVIATION. In terms of the quality of the translations produced, however, the results were far less satisfactory. In fact, the revisors refused to rank the translations in terms of technical accuracy, saying that they were all "pénible à reviser", or arduous to revise. In many cases, they did not modify the machine translation but found it easier to retranslate directly from the original. Moreover, none of the systems delivered the increase in translator productivity that their suppliers advertised. The authors of the feasibility study were thus unable to recommend that the Bureau purchase or make use of any of the three systems for its regular operations without further studies being conducted on much larger samples.

The feasibility study was completed in May 1981. In September of that year, TAUM was disbanded for want of funds. Former TAUMists, like myself, are often asked how the Translation Bureau could abandon machine translation in Canada. This is somewhat of a misconception, based on a misunderstanding of the relationship between TAUM and the Translation Bureau. The contract that the Bureau signed with the University of Montreal in 1975 was for the development, within three years, of a system that could eventually be used to translate the maintenance manuals of the CP-140. Following the presentation of that system to the Bureau in 1979, TAUM was granted an additional one-year contract to continue the development and documentation of the system. On the basis of the evaluation conducted in 1980 and the feasibility study conducted in 1981, the Bureau decided, in September 1981, to abandon its objective of using TAUM-AVIATION to translate the manuals of the CP-140. From the point of view of the Canadian taxpayer, this decision was certainly justifiable. Between 1976 and 1980, the Bureau had invested over $2.7 million in MT. In return, it found itself the owner of a system whose cost-effectiveness had not been demonstrated. The Bureau therefore decided that it needed a period of reflection in order to draw the lessons of its recent involvement in MT. As for TAUM, it made the unfortunate error of putting all its eggs in the same basket. When the AVIATION contract with the Bureau ended, it found itself with no other source of funding.

None of this is intended to suggest that TAUM-AVIATION was a failure. On the contrary, from a scientific point of view, the project carried many of the principles of second generation MT to their logical conclusion. The result was an extremely sophisticated system that produced fully automatic, high quality translations of texts in a well-defined sublanguage.
However, TAUM-AVIATION was not, in the fall of 1981, a system that was ready for large-scale operational production; nor, given the high cost of extending its dictionaries, was it a system that could easily become economically viable. It is important to ask why this is so. Why was dictionary construction so costly in TAUM-AVIATION? In particular, was this due to some fundamental flaw in TAUM's basic approach?

Under the second generation sublanguage approach employed at TAUM, an MT system is designed for a specific sublanguage, not for arbitrary texts from any domain. Such a system seeks to take advantage of each sublanguage's lexical, syntactic, semantic and textual restrictions in order to achieve maximum disambiguating power.[3] In AVIATION's analysis dictionary, for example, the entries for predicate words defined co-occurrence restrictions on their arguments; these restrictions were stated in terms of semantic classes that were found to be particularly relevant for texts in hydraulics maintenance. At transfer, each potential context thus defined for a lexical unit could then be used to state the necessary translation tests.[4] Writing dictionary entries under this approach requires an extensive corpus that is representative of texts in the particular sublanguage, and a careful study of that corpus in order to first determine the relevant semantic classes and then establish each lexical unit's co-occurrence restrictions and translation tests. This is a time-consuming and therefore costly process, but one that I would maintain is necessary if a system is to automatically produce high quality translations in a sublanguage as complex as aircraft maintenance manuals. To take just one example, consider the following typical maintenance command:

(1) Remove fitting and drain plug.

This sentence is syntactically ambiguous, i.e. it could be parsed as a conjunction of imperatives, in which case drain is taken to be a verb, or as a single imperative with a conjoined object, in which case drain is taken to be a noun. The only way of blocking the former, incorrect analysis in a second generation MT system is to specify in the source language dictionary entry for the verb drain that plugs are not drainable, although such objects as tanks and reservoirs are. In other words, a syntactic enumeration of permissible structures is often insufficient; the system must be provided with semantic features that distinguish between such objects as plugs and reservoirs, as well as with a specification of each predicate's complementation. This is a fairly fine semantic distinction, but one that would appear to be necessary for the automatic translation of hydraulics maintenance manuals.

A system like WEIDNER does not provide for selectional restrictions on predicate arguments. The only source language information given in its dictionary entries is the lexical unit's syntactic category. WEIDNER can distinguish homographs like drain in certain syntactic configurations, e.g. when the word is immediately preceded by an article, but not in a configuration like that in (1), where both a verb and a noun may occur after the conjunction (cf. Remove fitting and drain tank). Obviously, new dictionary entries will be relatively inexpensive in such a system. What will be expensive is revision. An interactive system like ALPS may, for sentences such as (1), interrupt the analysis process and ask a human operator to help it resolve the ambiguity. This too takes time, but one would normally expect to be compensated by less revision effort.
Unfortunately, in the feasibility study and in a subsequent operational trial of ALPS at the Translation Bureau, this did not prove to be the case.[5] Moreover, the human operators tended to find it frustrating to be asked the same sorts of questions over and over again.

It may be objected that revision costs for TAUM-AVIATION were also found to be high in the 1980 evaluation. This is true, but not for the same reasons as for the other systems tested in the feasibility study. The translations produced by TAUM-AVIATION were generally of good quality, and certainly revisable. The main problem was that the system failed to produce translations for too high a proportion of units, and these had to be translated by the revisor. The other systems tested in the feasibility study nearly always produced translations, but these were too often agrammatical and hence unrevisable.

The ideal solution, of course, would be not to have to sacrifice quality, but to increase the proportion of units translated by TAUM-AVIATION, by making the system more fail-safe, i.e. more resistant to minor errors in its dictionaries or grammars. Ways in which this could be done are discussed in Isabelle 1981. One of the suggestions made there is to build a sort of monitor into the analysis component which would be activated when no analysis was produced and which would re-run the analysis of the current unit after temporarily neutralizing a series of semantic or syntactic restrictions. In this way, a translation would be produced for a much higher proportion of units.[6] Something as straightforward as linking TAUM-AVIATION to a word processor would also facilitate revision and lower overall translation costs.

Dictionary construction in such a system, however, will always be relatively costly, or at least costlier than in systems like ALPS or WEIDNER. This is not due to a flaw in TAUM's basic approach, but simply because TAUM aimed for a higher level of comprehension of the texts it automatically translated than did ALPS or WEIDNER. We saw this in the example discussed above. In TAUM-AVIATION, the pivotal representations that were the output of analysis and the input to transfer sought to identify the basic predicate-argument structure of each sentence and distinguish between the various meanings of words.[7] This was thought to be a minimum without which the system would not be able to consistently produce revisable translations. We also saw the kind of dictionary effort that was required to attain this level of comprehension.

Another factor which influences the cost of dictionary construction in a second generation system like TAUM-AVIATION is the complexity of the sublanguage being translated. The more varied the range of structures in the sublanguage, the longer it takes to describe lexical co-occurrence restrictions. The larger the vocabulary of the sublanguage, the more homography it tends to display; these homographs must be distinguished in the system's dictionaries if they are not to reappear as problems at revision. Indeed, one of the principal lessons to be drawn from the experience of TAUM-AVIATION is that certain sublanguages are so complex that it is extremely difficult to attain the level of comprehension necessary for their automatic translation by means of second generation technology.

Since the close of the AVIATION project in 1981, the Translation Bureau has not lost interest in machine translation. With a workload approaching 300 million words a year, it cannot afford to do so.
Moreover, the Bureau still believes in the utility of the second generation sublanguage approach. The problem, as the Bureau now sees it, is to judiciously select the sublanguage to which a second generation system is to be applied. Weather bulletins combine the ideal characteristics for machine translation: high volume coupled with restricted syntax and vocabulary. Aircraft maintenance manuals are at the opposite end of the complexity scale; in fact, they may be too complex for second generation systems. But what about the many sublanguages in between? Which of these might be amenable to second generation technology, and what other factors, aside from a suitably defined notion of linguistic complexity, need to be considered in order to guarantee success? The Bureau has been subsidizing research into these questions, conducted by Professor Richard Kittredge of the University of Montreal. Professor Kittredge examined samples of 17 varieties of texts which seemed likely or desirable candidates for machine-aided translation within the Bureau, and identified a number of sublanguages that could be handled using current technology (cf. Kittredge 1983). The Bureau will be conducting a feasibility study of one such application this year, and if the results are satisfactory, the development of the first of a series of small-scale MT systems could begin in 1984-85. Other MT-related projects for the current year include the transfer of METEO onto a micro-computer and the introduction of TERMIUM III, the new version of the government's computerized terminology bank.

The Bureau now has a better understanding of the limits of second generation MT systems, and recognizes that fundamental research will be required before development can begin on the next generation of systems. To help orient the direction of such research, the Translation Bureau and the Department of Communications recently commissioned a large-scale study into the current state of natural language processing and artificial intelligence, with special emphasis on applications to machine translation and other related fields.

This study is expected to contain recommendations on the manner in which the government can best co-operate with universities and private enterprise in order to reactivate MT in Canada. All those interested in machine translation in the country are anxiously awaiting the publication of the final report.

Notes

1- It should be noted, however, that overall, the revised human version took two and a half times longer to produce than the revised machine version. The final revised versions were judged comparable in quality by a number of potential users, including Air Canada and the Department of National Defence.

2- Over the three types of texts, WEIDNER was found to be the least costly at $0.089 per word; next was ALPS at $0.113; and finally, SYSTRAN at $0.143 per word.

3- For a detailed description of the sublanguage of aircraft maintenance manuals and a discussion of the relevance of sublanguages for automatic translation, see Lehrberger 1978.

4- For a discussion of the work of the translator in a second generation system like TAUM-AVIATION, see Chevalier et al 1981.

5- A report on the six-month operational trial of ALPS at the Bureau should be available shortly.

6- The units thus produced would of course be flagged for special attention by the revisor.

7- See Lehrberger 1981 for a discussion of TAUM's linguistic model.

Lehrberger, J (1978)

Appendix:
null
null
null
null
{ "paperhash": [ "chevalier|la_traductologie_appliquée_à_la_traduction_automatique" ], "title": [ "La traductologie appliquée à la traduction automatique" ], "abstract": [ "La traductologie a pour objet de construire des modeles representant la competence du locuteur bilingue a mesurer l'equivalence d'enonces appartenant a deux langues naturelles differentes." ], "authors": [ { "name": [ "M. Chevalier", "P. Isabelle", "François Labelle", "Claude Lainé" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null ], "s2_corpus_id": [ "120578486" ], "intents": [ [] ], "isInfluential": [ false ] }
null
492
0
null
null
null
null
null
null
null
null
452c4d62cd51d1979e9b6362aff3ef6591fe270e
69348198
null
Robust processing in machine translation
We attempt to develop a general theory of robust processing for natural language, and especially for Machine Translation purposes; that is, a general characterization of methods by which processes can be made resistant to malfunctioning of various kinds. We distinguish three sources of malfunction: (a) deviant inputs, (b) deviant outputs, and (c) deviant pairings of input and output, and describe the assumptions that guide our discussion (sections 1 and 2). We classify existing approaches to (a)- and (b)-robustness, noting that not only do such approaches fail to provide a solution to (c)-type problems, but that the natural consequence of these solutions is to make (c)-type malfunctions harder to detect (section 3). In the final section (4) we outline possible solutions to (c)-type malfunctions.
{ "name": [ "Arnold, Doug and", "Johnson, Rod" ], "affiliation": [ null, null ] }
null
null
Proceedings of the International Conference on Methodology and Techniques of Machine Translation: Processing from words to language
1984-02-01
8
0
null
We distinguish three sources of malfunction: (a) deviant inputs, (b) deviant outputs, and (c) deviant pairings of input and output, and describe the assumptions that guide our discussion (sections 1 and 2). We classify existing approaches to (a)- and (b)-robustness, noting that not only do such approaches fail to provide a solution to (c)-type problems, but that the natural consequence of these solutions is to make (c)-type malfunctions harder to detect (section 3). In the final section (4) we outline possible solutions to (c)-type malfunctions.

To begin with, it is useful to narrow the discussion somewhat by noting a number of ways in which the problem of system fragility in the kind of environment we are concerned with differs from that encountered in Natural Language Processing (NLP) generally. For example, we are not particularly concerned with failures that result from different kinds of mis-spelling and mis-segmentation in the input, since we imagine that texts submitted for translation will already have been processed for such errors (perhaps automatically); nor are we concerned with dialogue-related problems such as highly fragmentary input, interjections, or false starts, though these clearly present serious difficulties in some kinds of NLP (e.g. in front ends to Expert Systems).

On the other hand, certain common solutions to problems of fragility are obviously not open to us. In particular, the aim of a general theory for robust MT excludes the use of highly domain-specific knowledge such as is exploited by many special-purpose NLP systems (cf. Hayes and Mouradian [5]), and the fact that we are concerned with translation militates against the disregard for input that is characteristic of some robust systems (it is displayed in an extreme form in e.g. PARRY [2]): it is not enough that an MT system behaves robustly, producing some output; it should produce output that stands as nearly as possible in the 'translation of' relation to its inputs. Finally, though we are not discussing the issue of developmental robustness here, we will obviously prefer robust processing to preserve a high degree of transparency - we take it as axiomatic that general purpose MT systems must be capable of extension and repair.

From the point of view we adopt, it is possible to regard an MT system as a set of processes implementing relations between representations (input and output texts can be considered representations of themselves). We distinguish three different kinds of relation:

(1) The correct, or intended, relation R between representations, e.g. the relation 'is a (correct) translation of', which pairs texts in one language with texts in another. We have only pre-theoretical and rather vague ideas about Rs, in virtue of being bi-lingual speakers, or having some intuitive grasp of the semantics of artificial representations.

(2) A theoretical construct T that is supposed to embody R.

(3) A process P that is supposed to implement T.

The need to distinguish R from T and P is obvious: it is possible to perform evaluations of a system only by comparing actual performance of P against ideas about R. However, it is also necessary to distinguish T from P. In some cases T may exist as a separate entity (e.g. if P is a process that implements an explicit grammar of some sort), so that the need to separate evaluation of T (the grammar) and P (the programs) is obvious.
However, even when T does not exist as an explicit set of propositions, it is useful to consider it separately, as an interface between pre-theory and process. It is a fact that every actual process implements a theory of inputs and outputs, however implicit, and the existence of such an interface is essential to our presentation. We want to distinguish carefully between evaluation of T (e.g. checking the adequacy of one's representational devices) and evaluation of P (checking how far an implementation delivers representations). We are concerned with automatic approaches to error detection and repair, and we can imagine no automatic method for checking the correctness of a representational device (T). For this reason we want to ignore questions of how far T can be considered a good instantiation of R.

Thus, in what follows, we will simply assume that T is both well-defined and a correct embodiment of R (e.g. in the case where T is a theory of translation between L and L', this assumption says that the membership of L and L' is well-defined and members of L are paired with their correct translations in L'). Realistically, in the context of NLP, the assumption of the correctness of T in relation to R will amount to assuming it to be the most correct available - that all T' distinct from T are in some way less correct - in fact, this assumption is sufficient for our purposes.

It will considerably simplify the exposition below if T can be regarded as a function; this can be achieved if we abstract away from the phenomenon of ambiguity (it will not matter if we regard T as a relation between individual representations, or between individual representations and sets of representations which are equivalent). While we think this simplification is essential to the exposition here, it is regrettable, since the inter-relation of ambiguity and robustness is an important matter.

Finally, we will assume that hardware and low-level software operate error-free and that P can be guaranteed to terminate for all inputs: there are well-known ways to ensure system robustness at this kind of level (e.g. termination is guaranteed by simply restricting allocation of resources to P), so that this assumption seems unproblematic.

Given these assumptions, the possible sources of error in a system or process P are restricted to two:

Problem 1: Correctness of P. P is not a correct implementation of T. One might expect this situation in cases where T is extremely complex, which we consider will be a common situation in NLP and MT - even for domains which are reasonably well understood, theories are extremely complex, and there are severe problems devising implementations of them.

Problem 2: Completeness of T. T, while correct, is not complete. We have assumed that T is correct, i.e. that it correctly pairs all items in its domain with items in its range. It is not a consequence of this assumption that the domain of T is co-extensive with the set of actual inputs to P. In fact, in realistic NLP we expect the set of actual inputs regularly to be a strict superset of the domain of T, for non-trivial Ts. Even if T were to include what amounts to a complete grammatical description of the input language this would be so, since we can expect some inputs of only marginal grammaticality, and all languages allow scope for creativity that is underdetermined by the rule system (e.g. creation of new, derivationally simple terms).
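The division of labour between T and P, and the two problems just distinguished, can be made concrete with a toy illustration (not from the paper; the miniature 'theory' below is invented):

# Toy illustration of Problems 1 and 2: T modelled as a partial function (a dict),
# P as the program that is supposed to implement it.

T = {"red wine": "vin rouge", "white wine": "vin blanc"}   # a miniature, well-defined theory

def P(text):
    # A deliberately imperfect implementation of T.
    if text == "white wine":
        return "vin rouge"          # Problem 1: P disagrees with T on T's own domain
    return T.get(text)              # inputs outside T's domain fall through to None

print(P("white wine"))      # 'vin rouge' -- incorrect, although T is defined for this input
print(P("sparkling wine"))  # None -- Problem 2: the input lies outside the domain of T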
In principle, it might turn out that a combination of advances in understanding together with restrictions on input might eliminate both sources of error. It seems reasonable to disregard this possibility, and assume that robust processing will always be necessary.

We can now state the possible manifestations of system fragility described in the introduction more concisely:

case (a): P(x) = Ø, i.e. P halts, producing Ø output for input x. This is the effect of illegal (unforeseen) input.

case (b): P(x) = z, where z is not a legal output for P according to T.

case (c): P(x) = y, where y is a legal output for P according to T, but is not the intended output according to T, i.e. y is in the range of T, but y ≠ T(x).

We should also be precise about the alternative to malfunctioning: 'correct' processing, and in particular processing which avoids (c)-type malfunctions. We will speak of outputs being 'T-correct' when they are the results of such processing. By abstracting away from ambiguity, we are able to consider T to be a function. It does not follow that the inverse of T (that is, T⁻¹) is also a function, since T may be many-to-one. This complicates the definition of T-correctness slightly: given P(x) = y, and the set W of all w such that T(w) = y, then y is T-correct with respect to x iff x is a member of W. When x is not a member of W, there is a (c)-type error.

3. Existing approaches.

We now present a classification of existing solutions to instances of (a)- and (b)-type fragility. None of these provides a solution to (c)-type fragility.

Case-(a) errors: 'input' robustness. Case (a) errors, where P halts without producing any output for input x, have their source in a mismatch between the expectations of P and the data it is presented with. They are the most commonly considered in the literature. There are two basic approaches to making systems 'input robust':

(i) to call some alternative process P' to manipulate the data so that it satisfies the expectations of P: for example, the LIFER [6] approach to elliptical input involves attempting to restore the ellipsis, so that the input can be processed by the normal rules. Notice that P' cannot guarantee to do more than make x formally acceptable to P, which will generally lead to (b)- and (c)-type problems, as later processes find inconsistent or incomplete information. The alternative would be fortuitous, and not very likely: (1) that P and P' together simply constitute a more correct implementation of T, hence solving Problem 1; (2) that P and P' together implement a theory T' which differs from T only in having a larger domain, hence solving Problem 2. Thus, though this strategy, if successful, will eliminate case (a) errors, case (b) and (c) errors will remain, and are likely to be more widespread.

(ii) to provide some mechanism for modifying the expectations of P: e.g. by calling some alternative process P' which embodies weaker expectations about data, or by attempting some temporary re-arrangement of P (cf. Kwasny and Sondheimer [7]), or simply relaxing P's requirements on inputs (as in 'Preference'-type approaches (Wilks [9])). Variants of this approach are extremely common (cf. Weischedel and Black [8], HEARSAY [4], and, in general, systems that favour a bottom-up approach to exception processing). The effect of this is to create some new process P' which accepts a superset of the inputs of P. Again, it is unlikely that this will simply yield a more correct implementation of T.
It is more likely that P' implements a new theoretical construct T', distinct from T, and with a wider domain. If this is successful, it will eliminate (a)-type errors, but now the existence of (b)- and (c)-type errors is likely: given that T and T' are different, they may well differ in terms of range, as well as domain, and anything which P' delivers outside the range of T constitutes a (b)- or (c)-type error. We return to the case where the ranges of T and T' are known to coincide in section 4.

(Notice, incidentally, that if making P robust does involve implementing an alternative to T, then the assumption about the correctness of the theory in relation to R is no longer valid, so that even if our processor is guaranteed to deliver what the theory T' predicts it should, there is no guarantee that this is what is really intended (i.e. what is correct for R). Of course, it might, in principle, turn out that T' actually better approximates to R, and performs better by accident - such accidents are extremely improbable, and we think we can disregard them.)

Case-(b) errors, where the output of P is ill-formed according to T, can be trapped straightforwardly, by imposing a 'goal filter' or well-formedness check on the output of P (as in TAUM AVIATION [1], where the output of Transfer is checked in this way). This approach is particularly useful where it is expected that collectively coherent and useful sub-parts of the output of P can be salvaged by the filter. The effect of this is that P either produces Ø (an (a)-type error) or something more well-formed, so that the likely result of filtering output in this way is to produce a proliferation of (a)-type errors as P is unable to produce any output that satisfies its goal. There are a number of ways to avoid this, the value of which is that processing successfully performed by P is not wasted, as it would be if P simply failed, producing Ø. The obvious danger of all this is that the 'fall-back' output of P may be illegal or unusable for some process that P feeds. We can distinguish three methods for achieving 'output robustness':

(i) introduce a fall-back process that will massage the output so that it becomes well-formed according to the goal. This is suitable when the actual output is very close to the desired output (e.g. if P is to output a complete labelled tree, the case where the actual output lacks only one label could be saved by a process which introduces a 'wild card label' that matches on anything, and thus satisfies the goal).

(ii) introduce some 'ranking' of successively weaker conditions on the output, so that if the output fails one, it may still be passed by another, less stringent filter. This would be a natural part of a strategy implementing a version of 'Preference' [9], to make sure that if a process produces a number of candidate outputs, the best of these is output, even if it is less than perfect.

(iii) It may be that P fails to produce an output of the desired kind, but that the system that includes P has been set up so that it has some alternative strategies which it is able to employ, using intermediate results of P. The example we have in mind is of a Transfer-based MT system (such as EUROTRA) in which analysis aims to produce a semantic representation, as input to normal transfer, but which includes a 'safety net' transfer module employing the syntactic representation that analysis routinely builds and maintains as it is attempting to produce the semantic representation.
Here the possibility of subsequent procedures failing by being unable to utilize the 'fall-back' output of P is extreme, and this solution incurs a considerable developmental overhead, as alternative processors must be designed to cope with the fall-back output.

Of course, a fourth alternative is simply to allow P to produce Ø, and rely on standard (a)-type solutions. Notice that in any case, trapping (b)-type errors is likely to lead to some (a)-type errors. Though the combination of P and P' may accept a superset of the inputs of P, the well-formedness check will mean that P itself sometimes outputs Ø, and it is likely that P and P' together will sometimes produce imperfect output that will cause some later process to fail, producing Ø.

Moreover, making P more robust by weakening the well-formedness check along the lines of (i)-(iii) has the same sort of pernicious effect as (a)-type solutions. Again, it is unlikely that P and P' together are simply a more correct implementation of T, or that they implement a T' which differs from T only in having a wider domain. The most probable effect of making P output-robust is that it now implements a version of a theoretical construct T' which differs from T both in domain (since output-robust P may well accept a superset of the inputs of P) and in range (since robust P is likely to produce a superset of the outputs of P).

The problem of (c)-type errors is now acute: the effect of increasing robustness has been to ensure that some approximation to superficially correct output is reliably produced. But, of course, there are many cases where no output is to be preferred to one which is superficially well-formed, but actually wrong as a representation of the input.

In fact, the situation is somewhat worse, for not only has the number of (c)-type errors increased as processing has been made (a)- and (b)-robust, but introducing (a)- and (b)-robustness has weakened our grip on the notion of correctness in relation to R itself, since the modifications P has undergone in being made robust have meant that it no longer implements T, but T', which may be distinct from, and weaker than, T.

From this we can draw an immediate and obvious conclusion about the need to distinguish sharply between 'ideal' and 'robust' processing. We have assumed that T is a correct (or the best approximation to a correct) instantiation of R, so that there is simply no point in checking for errors in relation to anything other than T (such a check would have no clear relation to the intuitive ideas about correctness that constitute R). If it is to be worthwhile, then, checking for (c)-type errors requires that we are able to distinguish T from the T' which is implemented by a robust version of P. Theoretically, this is unproblematic. However, in a domain such as MT it will be rather unusual for T and T' to exist separately from the P and P' that instantiate them. Thus, the need to separate 'ideal' and 'robust' processing in this context comes down to the need to be able to separate out those aspects of a robust processor that implement 'ideal' T. This will normally mean distinguishing sharply between P and P'. This is worth pointing out, since this distinction is not one that is made in most robust systems.

In the final section we discuss some ways in which automatic evaluation of P might be made feasible, and (c)-type errors detected.
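Before turning to that, the goal-filter and ranking devices described under (i) and (ii) above can be pictured with a small sketch (an illustration only; the tree encoding, the wild-card label and the filter conditions are invented for the example):

# Illustrative sketch of 'output robustness': a ranked series of well-formedness
# filters over a (toy) labelled tree, with a wild-card label as a last-resort repair.

WILD = "*"   # a 'wild card' label that later processes treat as matching anything

def fully_labelled(tree):          # strongest goal: every node carries a real label
    return all(label not in (None, WILD) for label, _ in tree)

def labelled_or_wild(tree):        # weaker goal: gaps have at least been patched with WILD
    return all(label is not None for label, _ in tree)

FILTERS = [fully_labelled, labelled_or_wild]   # ranked from strongest to weakest

def filter_output(tree):
    patched = [(label if label is not None else WILD, node) for label, node in tree]
    for candidate in (tree, patched):          # try the raw output first, then the repaired one
        for rank, passes in enumerate(FILTERS):
            if passes(candidate):
                return candidate, rank         # rank > 0 marks a degraded, fall-back result
    return None, None                          # nothing salvageable: an (a)-type failure instead

print(filter_output([("NP", "fitting"), (None, "plug")]))
# -> ([('NP', 'fitting'), ('*', 'plug')], 1): usable output, but flagged as second-rank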
null
null
null
In this paper we attempt to develop a general theory of robust processing and explore its consequences for certain kinds of Machine Translation. Specifically, we assume without argument the goal of a general purpose, fully automatic multilingual MT system to be developed within a highly decentralized organizational framework (for example, the European Commission's EUROTRA project*). The acceptance of such a goal influences our approach in a number of ways.

First, the requirement that the system be developed in a highly decentralized organizational framework results in the need for a theory which is both logically strong and highly general, abstracting from many details of purely local relevance.

Second, accepting the goal of multi-lingual MT means that the process of translation cannot be considered simply as a mapping of strings to strings: one is forced to consider the status of intermediate representations of various kinds.

Third, the fact that we consider the issue of robustness at all is a reflection of the difficulty of MT, and the aim of full automation is reflected in our concentration on a theory of robust processing rather than 'developmental robustness'. The general idea of system robustness conflates two quite separate ideas: on the one hand, the idea that systems should be capable of extension and repair by their designers (being, for example, resistant to unforeseen 'ripple effects' under modification). Notice that systems which have only this kind of robustness can never be fully automatic; thus, despite its importance, we will have little to say about this aspect of robustness here. On the other hand, there is the idea that systems should be robust in the sense of being capable of dealing with unexpected or deviant data: we will use the term 'robustness' and related expressions to refer to this.

Three kinds of problem give rise to the need for robustness. For any given process or procedure one may encounter:

case (a). Illegal inputs, i.e. inputs which have not been foreseen. Notice that from the processing (as opposed to the developmental) point of view, it is irrelevant whether the illegality arises from a deficiency in the input itself or a deficiency in the process, i.e., whether it is the input or the process that requires repair.

case (b). Illegal intermediate results. This will occur if some process malfunctions so as to produce deviant output (again the source of this malfunction is irrelevant). It may be that this is not detected until some other process takes this output as input - in which case we have an instance of case (a). However, it may be that in order to produce anything at all some process requires its output to satisfy some condition, so that this is conceptually a separate problem from case (a).

case (c). Suppose both input i and output j of some process are legal objects; it nevertheless does not follow that they have been correctly paired by the process. For example, in the case of a parsing process, i may be some sentence and j some representation. The fact that i and j are legal objects for the parsing process and that j is the output of the parser for input i does not guarantee that j is a 'correct' representation of i.
Of course, robust processing should be resistant to this kind of malfunctioning also.

We regard the problem of (c)-type fragility as the most serious, and the most resistant to solution: no existing approach is capable of dealing with it, and we will argue that the natural consequence of introducing solutions to (a)- and (b)-type errors is a proliferation of the more dangerous and insidious (c)-type errors.

In this paper we briefly review existing approaches to process robustness in Natural Language Processing and MT, with some discussion of their deficiencies in relation to our general goals, and the goal of (c)-type robustness in particular, and attempt to develop a partial solution to the problem of (c)-type robustness.

There are two distinct issues with respect to (c)-type errors: detection and repair. Clearly, the second presupposes the first, and though one of the approaches we describe yields a method of repair directly, we will have relatively little to say at this time about repair as such.

(c)-type errors differ from (a)- and (b)-type errors in an important way: (a)- and (b)-type errors can be detected quite simply by a check on well-formedness with respect to the domain and range of T respectively. (c)-type errors can only be detected with certainty by computing the pairing of elements of the domain and range of T. This is, of course, the task which P itself was designed to do. What is required is an implementation of a relation that pairs items in exactly the same way as T: the obvious candidate is the inverse of T, that is, T⁻¹. We will return to this below, but notice that this will only be feasible provided there is known to be a way of implementing T⁻¹ which is considered to be reliable.

However, we might consider a partial solution derived from a well-known technique in systems theory: insuring against the effect of faulty components in crucial parts of a system by computing the result for a given input by a number of different routes. For our purposes, the method would consist essentially in implementing the same computation in parallel a number of times and using statistical criteria to determine the correctness of the computation. We will call this the 'statistical solution'. (Notice that certain kinds of system architecture make this quite feasible, even given real-time constraints.) Clearly, however, while this should significantly improve the chances that output will be correct, it can provide no guarantee.

Moreover, the kind of situation we are considering is more complex than that arising given failure of relatively simple pieces of hardware. This is because we have to consider three distinct cases of failure.

(1) Where we believe T to be adequate, but expect P will be error-prone (cf. Problem 1: Correctness of P). In this case the obvious solution would involve implementing T many times, as independent Ps, taking the result that is most frequent.

The other two cases arise where we are relatively confident of the implementation, but are concerned with the incompleteness of T (cf. Problem 2: Completeness of T). On the face of it, we can proceed in this case by implementing a number of different Ts. However, this leads to new problems, as we have already suggested above.

(2) If the range of all these Ts coincides, then statistically at least, this method should yield adequate results. Normally, however, it is likely to be difficult to construct distinct Ts which have this property, and which at the same time correctly embody R.
Nevertheless, this would appear to be a natural way of extending approaches to (a)-type robustness to cope with (c)-type errors.

(3) We expect the normal situation to be that the ranges of the different Ts are distinct. The problem now is that we have no basis for comparison of the results, and hence no longer any sensible statistical criterion. Notice that the techniques for producing robustness in response to (a) and (b) errors virtually guarantee that this situation arises. (Note also that the apparent solution offered by some systems (e.g. GETA [3]), ranking Ts and accepting the results that conform to the highest-valued T subject to some measure of completeness, is evidently not a solution to the problem here, since it offers no guarantee that the output is T-correct with respect to any T.) Thus in this case, if we wish to check for (c)-type errors we have no alternative but to implement a process which computes the inverse of T.

The statistical solution is attractive because it shifts the emphasis in coping with (c)-type errors from detection to repair, and because it avoids the need to map backwards from output to input. Such solutions are certainly worth further consideration. However, realistically, we expect the normal situation to be as described in (3), so that it is worthwhile to consider the feasibility of 'inverse solutions' involving construction of P⁻¹ implementing the inverse of T.

The basic method here would be to compute an enumeration of the set of all possible inputs W that could have yielded the actual output, given T and some hypothetical ideal P which correctly implements it. (Again, this is not unrealistic; certain system architectures would allow forward computation to proceed while this inverse processing is carried out.) To make this worthwhile involves two assumptions:

1. That P⁻¹ terminates in reasonable time. This cannot be guaranteed, but it can be rendered a more reasonable assumption by observing characteristics of the input, and thus restricting W (e.g. restricting the members of W in relation to the length of the input guarantees that W is finite, and for some Ts it may be possible to exploit more interesting characteristics of internal structure).

2. That construction of P⁻¹ is somehow more straightforward than construction of P, so that P⁻¹ is likely to be more reliable than P. In fact this is not implausible for some applications (e.g. consider the case where P is a parser: it is a widely held idea that generators are easier to build than parsers).

Granted these assumptions, one simply examines the enumeration for the input. If it is present, then given that P⁻¹ is likely to be more reliable than P, it is likely that the output of P was T-correct, and hence did not constitute a (c)-type error. At least, the chances of the output of P being correct have been increased.

In the nature of things, we will ultimately be led to the original problems of robustness, but now in connection with P⁻¹. For this reason we cannot foresee any complete solution to problems of robustness generally. What we have seen is that solutions to one sort of fragility are normally only partly successful, leading to error of another kind elsewhere. Clearly, what we have to hope is that each attempt to eliminate a source of error nevertheless leads to a net decrease in the overall number of errors.
Main paper: introduction.: In this paper we attempt to develop a general theory of robust processing and explore its consequences for certain kinds of Machine Translation. Specifically, we assume without argument the goal of a general purpose, fully automatic multilingual MT system to be developed within a highly decentralized organizational framework (for example, the European Commission's EUROTRA project*). The acceptance of such a goal influences our approach in a number of ways.First, the requirement that the system be developed in a highly decentralized organizational framework results in the need for a theory which is both logically strong and highly general, abstracting from many details of purely local relevance.Second, accepting the goal of multi-lingual MT means that the process of translation cannot be considered simply as a mapping of strings to strings: one is forced to consider the status of intermediate representations of various kinds.Third, the fact that we consider the issue of robustness at all is a reflection of the difficulty of MT, and the aim of full automation is reflected in our concentration on a theory of robust processing rather than 'developmental robustness'. The general idea of system robustness conflates two quite separate ideas: on the one hand, the idea that systems should be capable of extension and repair by their designers (being, for example, resistant to unforeseen 'ripple effects' under modification). Notice that systems which have only this kind of robustness can never be fully automatic, thus, despite its importance, we will have little to say about this aspect of robustness here. On the other hand, there is the idea that systems should be robust in the sense of capable of dealing with unexpected or deviant data: we will use the terra 'robustness' and related expressions to refer to this.Three kinds of problem give rise to the need for robustness. For any given process or procedure one may encounter: case (a). Illegal inputs, i.e. inputs which have not been foreseen. Notice that from the processing (as opposed to the developmental) point of view, it is irrelevant whether the illegality arises from a deficiency in the input itself or a deficiency in the process, i.e., whether it is the input or the process that requires repair. case (b). Illegal intermediate results. This will occur if some process malfunctions so as to produce deviant output (again the source of this malfunction is irrelevant). It may be that this is not detected until some other process takes this output as input -in which case we have an instance of case (a). However, it may be that in order to produce anything at all some process requires its output to satisfy some condition, so that this is conceptually a separate problem from case (a). case (c). Suppose both input i and output j of some process are legal objects, it nevertheless does not follow that they have been correctly paired by the process. For example, in the case of a parsing process, i may be some sentence and j some representation. The fact that i and j are legal objects for the parsing process and that j is the output of the parser for input i does not guarantee that j is a 'correct' representation of i. 
Of course, robust processing should be resistant to this kind of malfunctioning also.We regard the problem of (c)-type fragility as the most serious, and most resistant to solution: no existing approach is capable of dealing with it, and we will argue that the natural consequence of introducing solutions to (a)-and (b)-type errors is a proliferation of the more dangerous and insidious (c)-type errors.In this paper we briefly review existing approaches to process robustness in Natural language processing and MT, with some discussion of their deficiencies in relation to our general goals, and the goal of (c)-type robustness in particular, and attempt to develop a partial solution to the problem of (c)-type robustness. basic notions and background assumptions.: To begin with it is useful to narrow the discussion somewhat by noting a number of ways in which the problem of system fragility in the kind of environment we are concerned with differs from that encountered in Natural Language Processing (NLP) generally. For example, we are not particularly concerned with failures that result from different kinds of mis-spelling, and mis-segmentation in the input, since we imagine that texts submitted for translation will already have been processed for such errors (perhaps automatically), nor are we concerned with dialogue related problems such as highly fragmentary input, interjections, or false starts, though these clearly present serious difficulties in some kinds of NLP (e.g. in front ends to Expert Systems).On the other hand, certain common solutions to problems of fragility are obviously not open to us. In particular, the aim of a general theory for robust MT excludes the use of highly domain specific knowledge such as is exploited by many special purpose NLP systems (cf. Hayes and Mouradian [5] ), and the fact that we are concerned with translation militates against the disregard for input that is characteristic of some robust systems (it is displayed in an extreme form in e.g. PARRY [2] ): it is not enough that an MT system behaves robustly, producing some output, it should produce output that stands as nearly as possible in the 'translation of relation to its inputs. Finally, though we are not discussing the issue of developmental robustness here, we will obviously prefer robust processing to preserve a high degree of transparency -we take it as axiomatic that general purpose MT systems must be capable of extension and repair.From the point of view we adopt, it is possible to regard an MT system as a set of processes implementing relations between representations (input and output texts can be considered representations of themselves). We distinguish three different kinds of relation:(1) The correct, or intended relation R between representations. E.g. the relation 'is a (correct) translation of, which pairs texts in one language with texts in another. We have only pre-theoretical and rather vague ideas about Rs, in virtue of being bi-lingual speakers, or having some intuitive grasp of the semantics of artificial representations.(2) A theoretical construct T that is supposed to embody R.(3) A process P that is supposed to implement T.The need to distinguish R from T and P is obvious: it is possible to perform evaluations of a system only by comparing actual performance of P against ideas about R. However, it is also necessary to distinguish T from P. In some cases T may exist as a separate entity (e.g. 
if P is a process that implements an explicit grammar of some sort) so that the need to separate evaluation of T (the grammar) and P (the programs) is obvious. However, even when T does not exist as an explicit set of propositions, it is useful to consider it separately, as an interface between pre-theory and process. It is a fact that every actual process implements a theory of inputs and outputs, however implicit, and the existence of such an interface is essential to our presentation. We want to distinguish carefully between evaluation of T (e.g. checking the adequacy of ones representational devices) and evaluation of P (checking how far an implementation delivers representation). We are concerned with automatic approaches to error detection and repair, and we can imagine no automatic method for checking the correctness of a representational device (T). For this reason we want to ignore questions of how far T can be considered a good instantiation of R.Thus, in what follows, we will simply assume that T is both welldefined and a correct embodiment of R (e.g. in the case where T is a theory of translation between L and L', this assumption says that the membership of L, and L' is well-defined and members of L are paired with their correct translations in L'). Realistically, in the context of NLP, the assumption of the correctness of T in relation to R will amount to assuming it to be the most correct available -that all T' distinct from T are in some way less correct -in fact, this assumption is sufficient for our purposes.It will considerably simplify the exposition below if T can be regarded as a function; this can be achieved if we abstract away from the phenomenon of ambiguity (it will not matter if we regard T as a relation between individual representations, or between individual representations and sets of representations which are equivalent). While we think this simplification is essential to the exposition here, it is regrettable, since the inter-relation of ambiguity and robustness is an important matter.Finally, we will assume that hardware and low-level software operate error-free and that P can be guaranteed to terminate for all inputs: there are well known ways to ensure system robustness at this kind of level (e.g. termination is guaranteed by simply restricting allocation of resources to P), so that this assumption seems unproblematic.Given these assumptions the possible sources of error in a system or process P are restricted to two: Problem 1: Correctness of P. P is not a correct implementation of T.One might expect this situation in cases where T is extremely complex, which we consider will be a common situation in NLP and MT -even for domains which are reasonably well understood, theories are extremely complex, and there are severe problems devising implementations of them.2:_ Completeness of_ T. T, while correct, is not complete.We have assumed that T is correct, i.e. that it correctly pairs all items in its domain with items in its range.It is not a consequence of this assumption that the domain of T is co-extensive with the set of actual inputs to P.In fact, in realistic NLP we expect the set of actual inputs regularly to be a strict superset of the domain of T, for non-trivial Ts.Even if T were to include what amounts to a complete grammatical description of the input language this would be so, since we can expect some inputs of only marginal grammaticality, and all languages allow scope for creativity that is under determined by the rule system (e.g. 
creation of new, derivationally simple terms).In principle, it might turn out that a combination of advances in understanding together with restrictions on input might eliminate both sources of error.It seems reasonable to disregard this possibility, and assume that robust processing will always be necessary.We can now state the possible manifestations of system fragility described in the introduction more concisely: case (a): P(x)=Ø. i.e. P halts producing Ø output for input x.This is the effect of illegal (unforeseen) input. case (b): P(x)=z where z is not a legal output for P according to T. case (c): P(x)=y, where y is a legal output for P according to T, but is not the intended output according to T. i.e. y is in the range of T, but y≠T(x).We should also be precise about the alternative to malfunctioning: 'correct' processing, and in particular processing which avoids (c)-type malfunctions. We will speak of outputs being 'T-correct' when they are the results of such processing.By abstracting away from ambiguity, we are able to consider T to be a function.It does not follow that the inverse of T (that is, T -1) is also a function, since T may be many-toone. This complicates the definition of T-correctness slightly:Given P(x) = y, and a set W such that for all w in W, T(w) = y, then y is T-correct with respect to w iff x is a member of W.When x is not a member of W, there is a (c)-type error.3. Existing approaches.We now present a classification of existing solutions to instances of (a)-and (b)-type fragility. None of these provides a solution to (c)-type fragility.Case-(a) errors: 'input' robustness:Case (a) errors, where P halts without producing any output for input x, have their source in a mismatch between the expectations of P and the data it is presented with. They are the most commonly considered in the literature. There are two basic approaches to making systems 'input robust':(i) to call some alternative process P' to manipulate the data so that it satisfies the expectations of P: for example, the LIFER [6] approach to elliptical input involves attempting to restore the ellipsis, so that the input can be processed by the normal rules. Notice that P' cannot guarantee to do more than make x formally acceptable to P, which will generally lead to (b)-and (c)-type problems, as later processes find inconsistent or incomplete information. The alternative would be fortuitous, and not very likely:(1) that P and P' together simply constitute a more correct implementation of T, hence solving problem 1; (2) that P and P' together implement a theory T' which differs from T only in having a larger domain, hence solving problem 2.Thus, though if it is successful, this strategy will eliminate case (a) errors, case (b) and (c) errors will remain, and are likely to be more wide-spread.(ii) Provide some mechanism for modifying the expectations of P: e.g. by calling some alternative process P' which embodies weaker expectations about data, or by attempting some temporary re-arrangement of P (cf. Kwasny and Sondheimer [7] ), or simply relaxing P's requirements on inputs (as in 'Preference' type approaches, (Wilks [9] )). Variants of this approach are extremely common: (cf. Weischedel and Black [8] , HEARSAY [4] , and in general, systems that favour a bottom up approach to exception processing). The effect of this is to create some new process P' which accepts a superset of the inputs of P. Again, it is unlikely that this will simply yield a more correct implementation of T. 
It is more likely that P' implements a new theoretical construct T', distinct from T, and with a wider domain.If this is successful, it will eliminate type (a)-errors, but now the existence of type-b and c errors is likely: given that T and T' are different, they may well differ in terms of range, as well as domain, and anything which P' delivers outside the range of T constitutes a (b)-or (c)type error.We return to the case where the ranges of T and T' are known to coincide in section 4.(Notice, incidentally, that if making P robust does involve implementing an alternative to T, then the assumption about the correctness of the theory in relation to R is no longer valid, so that even if our processor is guaranteed to deliver what the theory T' predicts it should, there is no guarantee that this is what is really intended (i.e. what is correct for R). Of course, it might, in principle turn out that T' actually better approximates to R, and performs better by accident -such accidents are extremely improbable, and we think we can disregard them).Case-(b) errors, where the output of P is ill-formed according to T can be trapped straightforwardly, by imposing a 'goal filter' or well-formedness check on the output of P (as in TAUM AVIATION [1] , where the output of Transfer is checked in this way). This approach is particularly useful where it is expected that collectively coherent and useful sub-parts of the output of P can be salvaged by the filter.The effect of this is that P either produces 0 (an (a)-type error), or something more well-formed, so that the likely result of filtering output in this way is to produce a proliferation of type-a errors as P is unable to produce any output that satisfies its goal. There are a number of ways to avoid this, the value of which is that processing successfully performed by P is not wasted, as it would be if P simply failed, producing Ø. The obvious danger of all this is that the 'fall back' output of P may be illegal or unusable for some process that P feeds. We can distinguish three methods for achieving 'output robustness':(i)introduce a fall-back process that will massage the output so that it becomes well-formed according to the goal. This is suitable when the actual output is very close to the desired output (e.g. if P is to output a complete labelled tree, the case where the actual output lacks only one label could be saved by a process which introduces a 'wild card label' that matches on anything, and thus satisfies the goal).(ii) introduce some 'ranking' of successively weaker conditions in the output, so that if output fails one, it may still be passed by another less stringent filter. This would be a natural part of a strategy implementing a version of 'Preference' [9] to make sure that if a process produces a number of inputs, the best of these is output, even if it is less than perfect.(iii) It may be that P fails to produce an output of the desired kind, but that the system that includes P has been set up so that it has some alternative strategies which it is able to employ, using intermediate results of P. The example we have in mind is of a Transfer based MT system (such as EUROTRA) in which analysis aims to produce a semantic representation, as input to normal transfer, but which includes a 'safety net' transfer module employing the syntactic representation that analysis routinely builds and maintains as it is attempting to produce the semantic representation. 
Here the possibility of subsequent procedures failing by being unable to utilize the 'fallback' output of P is extreme, and this solution incurs a considerable developmental overhead, as alternative processors must be designed to cope with the fall-back output.Of course, a fourth alternative is to simply allow P to produce Ø, and rely on standard (a)-type solutions. Notice that in any case, trapping (b)-type errors is likely to lead to some (a)-type errors. Though the combination of P and P' may accept a superset of the inputs of P, the well-formedness check will mean that P itself sometimes outputs Ø, and it is likely that P and P' together will sometimes produce imperfect output that will cause some later process to fail, producing Ø.Moreover, making P more robust by weakening the well-formedness check on the lines of (i)-(iii) has the same sort of pernicious effect as (a)-type solutions. Again, it is unlikely that P and P' together are simply more correct implementations of T, or that they implement T' which differs from T only in having a wider domain. The most probable effect of making P output robust is that it now implements a version of a theoretical construct T' which differs from T both in domain (since output robust P may well accept a superset of P) and range (since robust P is likely to produce a superset of P).The problem of (c)-type errors is now acute: the effect of increasing robustness has been to ensure that some approximation to superficially correct output is reliably produced. But, of course, there are many cases where no output is to be preferred to one which is superficially well-formed, but actually wrong as a representation of the input.In fact, the situation is somewhat worse, for not only has the number of (c)-type errors increased as processing has been made (a)-and (b)-robust, but introducing (a)-and (b)-robustness has weakened our grip on the notion of correctness in relation to R itself, since the modifications P has undergone in being made robust have meant that it no longer implements T, but T', which may be distinct from, and weaker than T.From this we can draw an immediate and obvious conclusion about the need to distinguish sharply between 'ideal' and 'robust' processing. We have assumed that T is a correct (or the best approximation to a correct) instantiation of R, so that there is simply no point in checking for errors in relation to anything other than T (such a check would have no clear relation to the intuitive ideas about correctness that constitute R). If it is to be worthwhile, then, checking for (c)-type errors requires that we are able to distinguish T from the T' which is implemented by a robust version of P. Theoretically, this is unproblematic. However, in a domain such as MT it will be rather unusually for T and T' to exist separately from P and P' that instantiate them. Thus, the need to separate 'ideal' and 'robust' processing in this context comes down to the need to be able to separate out those aspects of a robust processor that implement 'ideal' T. This will normally mean distinguishing sharply between P and P'. This is worth pointing out, since this distinction is not one that is made in most robust systems.In the final section we discuss some ways in which automatic evaluation of P might be made feasible, and (c)-type errors detected. two approaches to (c)-type robustness: There are two distinct issues with respect to (c)-type errors: detection and repair. 
Clearly, the second presupposes the first, and though one of the approaches we describe yields a method repair, directly, we will have relatively little to say at this time about repair as such.(c)-type errors differ from (a)-and (b)-type errors in an important way: (a)-and (b)-type errors can be detected quite simply by a check on well-formedness with respect to the domain and range of T respectively. (c)-type errors can only be detected with certainty by computing the pairing of elements of the domain and range of T. This is, of course, the task which P itself was designed to do.What is required is an implementation of a relation that pairs items in exactly the same way as T: the obvious candidate is the inverse of T, that is, T -l . We will return to this below, but notice this will only be feasible provided there is known to be a way of implementing T -l which is considered to be reliable.However, we might consider a partial solution derived from a well-known technique in systems theory: insuring against the effect of faulty components in crucial parts of a system by computing the result for a given input by a number of different routes. For our purposes, the method would consist essentially in implementing the same computation in parallel a number of times and using statistical criteria to determine the correctness of the computation. We will call this the 'statistical solution'. (Notice that certain kinds of system architecture make this quite feasible, even given real time constraints.) Clearly, however, while this should significantly improve the chances that output will be correct, it can provide no guarantee.Moreover, the kind of situation we are considering is more complex than that arising given failure of relatively simple pieces of hardware. This is because we have to consider three distinct cases of failure.(1) Where we believe T to be adequate, but expect P will be error prone. (c.f. Problem 1: Correctness of P) In this case the obvious solution would involve implementing T many times, as independent Ps, taking the result that is most frequent.The other two cases arise where we are relatively confident of the implementation, but are concerned with the incompleteness of T (cf Problem 2: Completeness of T). On the face of it, we can proceed in this case by implementing a number of different T's. However, this leads to new problems, as we have already suggested above.(2) If the range of all these Ts coincides, then statistically at least, this method should yield adequate results. Normally, however, it is likely to be difficult to construct distinct T's which have this property, and which at the same time correctly embody R. Nevertheless, this would appear to be a natural way of extending approaches to (a)-type robustness to cope with (c)-type errors.(3) We expect the normal situation to be that the ranges of the different T's are distinct. The problem now is that we have no basis for comparison of the results, and hence no longer any sensible statistical criterion. Notice that the technique of producing robustness in response to (a) and (b) errors virtually guarantee that this situation arises. (Note also that the apparent solution offered by some systems (e.g. GETA [3] ), ranking Ts and accepting the results that conform to the highest valued T subject to some measure of completeness, is evidently not a solution to the problem here, since it offers no guarantee that the output is T-correct with respect to any T). 
Thus in this case, if we wish to check for (c)-type errors we have no alternative but to implement a process which computes the inverse of T.The statistical solution is attractive because it shifts the emphasis in coping with (c)-type errors from detection to repair, effect of faulty components in crucial parts of a system by computing the result for a given input by a number of different routes. For our purposes, the method would consist essentially in implementing the same computation in parallel a number of times and using statistical criteria to determine the correctness of the computation. We will call this the 'statistical solution'. (Notice that certain kinds of system architecture make this quite feasible, even given real time constraints.) Clearly, however, while this should significantly improve the chances that output will be correct, it can provide no guarantee.Moreover, the kind of situation we are considering is more complex than that arising given failure of relatively simple pieces of hardware. This is because we have to consider three distinct cases of failure.(1) Where we believe T to be adequate, but expect P will be error prone. (c.f. Problem 1: Correctness of P) In this case the obvious solution would involve implementing T many times, as independent Ps, taking the result that is most frequent.The other two cases arise where we are relatively confident of the implementation, but are concerned with the incompleteness of T (cf Problem 2: Completeness of T). On the face of it, we can proceed in this case by implementing a number of different T's. However, this leads to new problems, as we have already suggested above.(2) If the range of all these Ts coincides, then statistically at least, this method should yield adequate results. Normally, however, it is likely to be difficult to construct distinct T's which have this property, and which at the same time correctly embody R. Nevertheless, this would appear to be a natural way of extending approaches to (a)-type robustness to cope with (c)-type errors.(3) We expect the normal situation to be that the ranges of the different T's are distinct. The problem now is that we have no basis for comparison of the results, and hence no longer any sensible statistical criterion. Notice that the technique of producing robustness in response to (a) and (b) errors virtually guarantee that this situation arises. (Note also that the apparent solution offered by some systems (e.g. GETA [3] ), ranking Ts and accepting the results that conform to the highest valued T subject to some measure of completeness, is evidently not a solution to the problem here, since it offers no guarantee that the output is T-correct with respect to to any T). Thus in this case, if we wish to check for (c)-type errors we have no alternative but to implement a process which computes the inverse of T.The statistical solution is attractive because it shifts the emphasis in coping with (c)-type errors from detection to repair, and because they avoid the need to map backwards from output to input. Such solutions are certainly worth further consideration. However, realistically, we expect the normal situation to be as described in (3) , so that it is worthwhile to consider the feasibility 'inverse solutions' involving construction of P -l implementing the inverse of T.The basic method here would be to compute an enumeration of the set of all possible inputs W that could have yielded the actual output, given T, and some hypothetical ideal P which correctly implements it. 
(Again, this is not unrealistic; certain system architectures would allow forward computation to proceed while this inverse processing is carried out).To make this worthwhile involves two assumptions:1. That P -l terminates in reasonable time. This cannot be guaranteed, but it can be rendered a more reasonable assumption by observing characteristics of the input, and thus restricting W (e.g. restricting the members of W in relation to the length of the input guarantees that W is finite, and for some Ts it may be possible to exploit more interesting characteristics of internal structure).2. That construction of P -l is somehow more straightforward than construction of P, so that P -l is likely to be more reliable than P. In fact this is not implausible for some applications (e.g. consider the case where P is a parser: it is a widely held idea that generators are easier to build than parsers).Granted these assumptions, one simply examines the enumeration for the input if it is present. If it is present, then given that P -l is likely to be more reliable than P, then it is likely that the output of P was T-correct, and hence did not constitute a (c)-type error. At least, the chances of the output of P being correct have been increased.In the nature of things, we will ultimately be lead to the original problems of robustness, but now in connection with P-1. For this reason we cannot foresee any complete solution to problems of robustness generally. What we have seen is that solutions to one sort of fragility are normally only partly successful, leading to error of another kind elsewhere. Clearly, what we have to hope is that each attempt to eliminate a source of error nevertheless leads to a net decrease in the overall number of errors. : We distinguish three sources of malfunction: (a) deviant inputs, (b) deviant outputs, and (c) deviant pairings of input and output, and describe the assumptions that guide our discussion (sections 1 and 2). We classify existing approaches to (a)-and (b)-robustness, noting that not only do such approaches fail to provide a solution to (c)-type problems, but that the natural consequence of these solutions is to make (c)-type malfunctions harder to detect (section 3) In the final section (4) we outline possible solutions to (c)-type malfunctions. Appendix:
null
null
null
null
{ "paperhash": [ "kwasny|relaxation_techniques_for_parsing_grammatically_ill-formed_input_in_natural_language_understanding_systems", "hayes|flexible_parsing", "erman|hearsay-ii._tutorial_introduction_and_retrospective_view", "hendrix|human_engineering_for_applied_natural_language_processing" ], "title": [ "Relaxation Techniques for Parsing Grammatically Ill-Formed Input in Natural Language Understanding Systems", "Flexible Parsing", "Hearsay-II. Tutorial Introduction and Retrospective View", "Human Engineering for Applied Natural Language Processing" ], "abstract": [ "This paper investigates several language phenomena either considered deviant by linguistic standards or insufficiently addressed by existing approaches. These include co-occurrence violations, some forms of ellipsis and extraneous forms, and conjunction. Relaxation techniques for their treatment in Natural Language Understanding Systems are discussed. These techniques, developed within the Augmented Transition Network (ATN) model, are shown to be adequate to handle many of these cases.", "When people use natural language in natural settings, they often use it ungrammatically, missing out or repeating words, breaking-off and restarting, speaking in fragments, etc., Their human listeners are usually able to cope with these deviations with little difficulty. If a computer system wishes to accept natural language input from its users on a routine basis, it must display a similar indifference. In this paper, we outline a set of parsing flexibilities that such a system should provide. We go on to describe FlexP. a bottom-up pattern-matching parser that we have designed and implemented to provide these flexibilities for restricted natural language input to a limited-domain computer system.", "Abstract : The Hearsay-2 system, developed at CMU as part of the five-year ARPA speech-understanding project, was successfully demonstrated at the end of that project in September 1976. This report reprints two Hearsay II papers which describe and discuss that version of the system: The 'Hearsay-2 System: A Tutorial', and 'A Retrospective View of the Hearsay-2 Architecture'. The first paper presents a short introduction to the general Hearsay-2 structure and describes the September 1976 configuration of knowledge-sources; it includes a detailed description of an utterance being recognized. The second paper discusses the general Hearsay-2 architecture and some of the crucial problems encountered in applying that architecture to the problem of speech understanding.", "Human engineering features for enhancing the usabil ity of practical natural language systems a l re described. Such features include spelling correction, processing of incomplete (ell ipt ic-~I) input?, jntfrrog-t ior of th p underlying language definition through English oueries, and ?r rbil.it y for casual users to extrnd the language accepted by the system through the-use of synonyms ana peraphrases. All of 1 h* features described are incorporated in LJFER,-\"n r ppl ieat ions-orj e nlf d system for 1 creating natural language j nterfaees between computer programs and casual USERS LJFER's methods for r<\"v] izir? the mroe complex human enginering features ? re presented. 1 INTRODUCTION This pape r depcribes aspect r of a n applieations-oriented system for creating natural langruage interfaces between computer software and Casual users. Like the underlying researen itself, the paper is focused on the human engineering involved in designing practical rnd comfortable interfaces. 
This focus has lead to the investigation of some generally neglected facets of language processing, including the processing of Ireomplfte inputs, the ability to resume parsing after recovering from spelling errors and the ability for naive users to input English stat.emert s at run time that, extend and person-lize the language accepted by the system. The implementation of these features in a convenient package and their integration with other human engineering features are discussed. There has been mounting evidence that the current state of the art in natural language processing, although still relatively primitive, is sufficient for dealing with some very real problems. For example, Brown and Burton (1975) have developed a usable system for computer assisted instruction, and a number of language systems have been developed for interfacing to data bases, including the REL system developed by Thompson and Thompson (1975), the LUNAR system of Woods et al. (1972), and the PLANES system ol Walt7 (1975). The SIGART newsletter for February, 1977, contains a collection cf 5? short overviews of research efforts in the general area of natural language interfaces. Tnere has rise been a growing demand for application systems. At SRi's Artificial Irtellugene Center alone, many programs are ripe for the addition of language capabilities, Including systems for data base accessing, industrial automation, automatic programming, deduct ior, and judgmental reasoning. The appeal cf these systems to builders ana users .-'like is greatly enhanced when they are able to accept natural language inputs. B. The LIFER SYSTEM To add …" ], "authors": [ { "name": [ "S. Kwasny", "N. Sondheimer" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "P. Hayes", "G. Mouradian" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "L. Erman", "V. Lesser" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "G. Hendrix" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null, null ], "s2_corpus_id": [ "181820", "11007680", "59900183", "5436772" ], "intents": [ [ "background" ], [], [ "background" ], [ "background" ] ], "isInfluential": [ false, false, false, false ] }
null
492
0
null
null
null
null
null
null
null
null
6f4b27e8d1f5bc5f6c318632e6af3cc923ac6668
3213250
null
Machine translation from designers to users: management problems and solutions
Today Machine Translation (MT) systems are at best unique combinations of mathematical, linguistic and algorithmic theories, and of the absence of any theory of translation.
{ "name": [ "Thou{\\^\\i}n, Benoit" ], "affiliation": [ null ] }
null
null
Proceedings of the International Conference on Methodology and Techniques of Machine Translation: Processing from words to language
1984-02-01
0
1
null
In most instances, be it or not because of the complexity of the theories and models involved, managers and translators have been kept out, or kept themselves out, of MT systems design and development. However, they are the ones who have to use and manage such systems (if they ever become operational), cope with their development and operational costs and, with the help of such strange tools, achieve objectives of better communication.Clearly, since designers and users of operational MT systems are quite separate groups, it is not less than a transfer of technology that must occur for managers and translators, who are MT-wise developing professionals, to inherit the so-called achievements of developed computational linguistics theories.Most of those technology transfer problems resemble the ones managers are faced with when a new computerized information system is implemented in its operational and user environment: system and acceptance testing, possible strategies of implementations, conversion from old (manual) to new system, training, resistance to change, operation per se, including file and data-base maintenance, on-going evaluation and improvement of the system, etc. The paper will briefly overview these problems as they arise in an MT environment.Problems are interesting, but solutions are even more so. With examples mainly from the North-American experience, the paper will discuss original strategies that render easier the access of users to MT technology: early involvement of users in the development process, incorporation into existing operation environment, incorporation into a total document design and production system, total service by a translation firm making the system fully transparent to end-users, layered software structure, micro-computer implementation, direct connection and use through existing computer networks, and more ideas that will have emerged or been implemented by the time of the Conference.
null
null
null
null
Main paper: : In most instances, be it or not because of the complexity of the theories and models involved, managers and translators have been kept out, or kept themselves out, of MT systems design and development. However, they are the ones who have to use and manage such systems (if they ever become operational), cope with their development and operational costs and, with the help of such strange tools, achieve objectives of better communication.Clearly, since designers and users of operational MT systems are quite separate groups, it is not less than a transfer of technology that must occur for managers and translators, who are MT-wise developing professionals, to inherit the so-called achievements of developed computational linguistics theories.Most of those technology transfer problems resemble the ones managers are faced with when a new computerized information system is implemented in its operational and user environment: system and acceptance testing, possible strategies of implementations, conversion from old (manual) to new system, training, resistance to change, operation per se, including file and data-base maintenance, on-going evaluation and improvement of the system, etc. The paper will briefly overview these problems as they arise in an MT environment.Problems are interesting, but solutions are even more so. With examples mainly from the North-American experience, the paper will discuss original strategies that render easier the access of users to MT technology: early involvement of users in the development process, incorporation into existing operation environment, incorporation into a total document design and production system, total service by a translation firm making the system fully transparent to end-users, layered software structure, micro-computer implementation, direct connection and use through existing computer networks, and more ideas that will have emerged or been implemented by the time of the Conference. Appendix:
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
492
0.002033
null
null
null
null
null
null
null
null
d1549f3502302ee8bdecefdfddd18581d237b0ab
13020131
null
The generation of {C}hinese sentences from semantic representations of {E}nglish sentences
The paper describes the CASSEX package, a parser which takes as input English sentences and produces semantic representations of them, and gives an account of the generation procedure which translates these semantic representations into Chinese sentences.
{ "name": [ "Huang, Xiuming" ], "affiliation": [ null ] }
null
null
Proceedings of the International Conference on Methodology and Techniques of Machine Translation: Processing from words to language
1984-02-01
14
4
null
A Natural Language (NL) generator can be a system on its own right, as is (Meehan 76)'s TALE_SPIN which generates stories. More usually, however, a generator is part of a larger system, which generates surface text from an intermediate data structure produced by another component of the system, the analyser.The generation component of a NL system plays a twofold role: firstly, it tests whether or not the output of the analysis component is correct, thus providing a kind of feedback to the analyser writer. For instance, (Goldman 75)'s generator BABEL detects that in the PARAPHRASE MODE, (Schank 75)'s conceptual analyser MARGIE fails to find the "reader" of the book:INPUT: Reading the book reminded Rita to loan the book to Bill. OUTPUT: Rita remembered to give the book to Bill and she expects him to return it to her because someone read the book.Secondly, if the analysis output is correct, it tests whether or not the representation is good, in terms of the cost and efficiency involved in getting the final result usable to the user (inferences, paraphrases, summaries, answers, or translations, depending on the purpose of the system). Therefore, although generation has "traditionally been the poor relation in NL work" (Cater 81, p.30), a good generator is obviously a necessity to all NL workers.For generating surface text from an intermediate data structure, we can either employ a connected body of grammar rules, most often an ATN generation grammar (Goldman 75, Simmons and Slocum 72, and Burton 76), sometimes an ATN for both analysing and generation (Shapiro 82 ); or we can use a set of functions or specialists (Boguraev 79, Cater 81).The generation procedure described in this paper takes the latter approach of using a set of functions because it is more straightforward and more economical to implement (you don't need another interpreter to run the generation ATN, for instance). Used in conjunction with the CASSEX package, an English sentence analyser, the generator produces good quality Chinese translations for a group of English sentences all of which contain the conjunction "and". The analyser and the generator comprise a prototype English-Chinese Machine Translation (MT) system. In this paper I will review the CASSEX package first, then give a description of the generation procedure.The CASSEX package
null
The CASSEX package is a parser developed from (Boguraev 79)'s work, a system based on ATN grammars (Woods 73) and Preference Semantics (Wilks 75 ). Boguraev's major aim was to resolve linguistic ambiguities, either lexical or structural in individual sentences. The resolution of ambiguities is shown by generating paraphrases for input sentences. Referential ambiguities, as well as ambiguities caused by conjunctions, were not taken into consideration in his system.overall design of Boguraev's system bears a strong resemblance to that of Winograd (1972) . "The analyser ... seeks to use strong semantic judgment within the framework supplied by syntactically-driven parsing" (Boguraev 79, p. 0.2). Semantics routines (NPBUILD and SBUILD) are called after the system's syntax parser (an ATN) recognizes a noun phrase, relative clause, complement or a complete sentence. They fulfill two tasks:1. Structurally, constructing for every input sentence one or more semantic representation(s) which is a dependency tree with verb as the most important node and case slots as its daughters (discussed in more detail in Section 1.4).2. Judgementally, ruling out ill-formed semantic structures which blocks syntactically valid paths. In other words, the semantic routines confirm or block the syntactic paths of the parser -they never drive the parser (i.e. never suggest a particular syntactic path).In the following sub-sections we will look at some of the system's features.The Resolution of Word-sense Ambiguities Word-sense ambiguities are resolved by the semantic routines. For instance, the sentence 1The green crook kicked the ball could have sixteen possible interpretations, if we assume four word-senses for "green" ("green-coloured", "inexperienced", "angry", and "unripe"), two for "crook" ("shepherd stick" and "villain" and two for "ball" ("a spherical object for playing with" and "a social event for dancing"); however, NPBUILD delivers only two interpretations of "the green crook" for later processing, the two corresponding to "the inexperienced villain" and "the green-coloured shepherd stick"; then, after two readings of "the ball" are built, SBUILD is called and only one interpretation for the sentence is constructed, which is valid both syntactically and semantically, and reads to the effect that "The inexperienced villain kicked the spherical object".Apart from tackling the problem of attaching prepositional phrases to appropriate constituents (mainly noun phrases or verbs), Boguraev attempts to handle all the ways in which prepositions can occur in a sentence. These are as:the particles of particled verbs, such as "away" in "throw away"; in semi-idiomatic expressions like "green with envy" or where with a particular verb different prepositions impose different meanings on the verb, or express a finer distinction of meaning, e.g. "aim at" vs "aim for"; in obligatory cases, e.g. "look at", "look for", etc.in optional cases, e.g. 
"go to the theatre with somebody", "rise with the sun", etc.Boguraev's dictionary design allows for the first three types (these are not yet implemented in the CASSEX package); the fourth is handled by preplates, an adaptation of (Wilks 75)'s paraplates.The preplates allow not only for the modification of verbs(as paraplates did) but also nouns.The structure of preplates is the same for both verbs and nouns.The following is the preplates for "with"*:((*ENT ATTRIBUTE *INAN) WITH1) ((MOVE INSTRUMENT THING) WITH2) ((NOTHAVE MANNER *MAR) WITH3) ((STRIK INSTRUMENT *INST) WITH4) ((CHANGE INSTRUMENT *INST) WITH5) ((CAUSE INSTRUMENT *INST) WITH6) (((SEE SENSE) INSTRUMENT (SEE THING)) WITH7) ((*DO MANNER *MAN) WITH8) ((*HUM ACCOMPANIMENT *HUM) WITH9) ((*DO ACCOMPANIMENT *HUM) WITH10)The actual preplate contains three elements.The first is the * "WITH1", "WITH2", etc, are attached to the original preplates in Boguraev's system so as to meet the need of generating Chinese in later stage. "With" appearing in different preplates can have different equivalent in Chinese.In the following text, when talking about preplates, we will mean the actual preplate triples.preferred semantic category of whatever constituent is being modified (a verb or a noun phrase); the second is the case relation between verb (or NP) and the postmodifying PP; the third is the required semantic category of the head noun of the postmodifying PP.To show how preplates works in attaching PPs, consider the sentence(2) I hit the man with the hammer.The PP, or rather the head of the PP, is "hammer". Its head primitive is INST. The PP can either modify the verb "hit", whose head primitive is STRIK, or the NP "the man", whose head primitive is MAN.Two of the above preplates match. Firstly, the preplate (*ENT ATTRIBUTE *INAN) because MAN ("the man") is an *ENT and INST ("the hammer") is INAN. Hence, the PP is tied to the NP: "the hammer" is an ATTRIBUTE of "the man". The sentence could be paraphrased as "I hit the man who had the hammer". Secondly, (STRIK INSTRUMENT INST) because the head primitives of "hit" and "hammer" are STRIK and INST. The PP is tied to the verb; "the hammer" has an INSTRUMENT relation to the verb. According to this case relation and PP attachment, the sentence could be paraphrased as "I hit the man with the hammer that I had".As was mentioned earlier, the semantic representations delivered by the CASSEX package are dependency trees with verbs as the most important nodes and case slots as their daughters. The representation for sentence (1) "The green crook kicked the ball" is as follows:(CLAUSE (TYPE NIL) (QUERY NIL) (TNS PAST) (ASPECT NIL) (MODALITY NIL) (NEG NIL) (V (KICK ((*ANI SUBJ) ((*PHYSOB OBJE) ((THIS (MAN PART)) INST) STRIK)) (OBJECT ((BALL1 (NOTFLOW THING)) (NUMBER SINGLE) (QUANTIFIER SG) (DETERMINER ((DET1 ONE))))) (AGENT ((CROOK1(((NOTGOOD ACT)OBJE)DO) (SUBJ MAN)) (STATE (GREEN4 ((MAN POSS)(((NOTMUCH (TRUE THINK)) (SUBJ KIND)))))) (NUMBER SINGLE) (QUANTIFIER SG) (DETERMINER ((DET1 ONE))))))))These representations, as we can see above, clearly show the syntactic structure and case relations between word-senses within constituents and between constituents. The surface sentence, together with the word order, however, has been lost: we don't to carry it along, like many MT system do (e.g. Liu 81), because dependency trees provide enough information for generating Chinese.The major improvement of the CASSEX package over Boguraev's system is its ability to process conjunctions. 
In order to achieve this, grammars specifically designed for conjunctions have been incorporated into the system (see Huang 83 for detail). The CASSEX package deals with sentences containing Gapping, Right Node Raising or Reduced Conjunction, as well as the common cases of "and" conjunction. As for the representation of conjunctions, I follow (Ross 67)'s line, treating them as sisters of the conjuncts. The following are two examples.(3)The man with the telescope and the umbrella kicked the ball.The man kicked the ball and the woman threw the ball.The GeneratorThe generation procedure in Boguraev's system is used for providing paraphrases of the original input sentences. It contains three main steps:Selection of the main verb from a set of verbs synonymous with the verb-sense in the semantic representation given by the analyser, and selection of the rest of the target language words (here the target language is English). This step reduces the number of possible output verb synonyms to just one.Definition of the syntactico-semantic relationships. This is realised by the production of an environment network which contains both syntactic and semantic information relevant to the contextual environment (i.e., the information stored in the Wilksian word-sense formula) of the main verb.Actual output of the generated sentence. This phase makes extensive use of the target language dictionary and grammar rules and makes sure that the generated sentence is a syntactically well-formed string of words.The generator works impressively, producing well-formed paraphrases for many ambiguous sentences.The Chinese Generator Boguraev's generator doesn't suit our purpose very well, however, for several reasons. First, it was written for paraphrasing in English, hence its verb-centred nature (emphasis on main verb selection; the production of the environment network around the verb). In Chinese, the verb is less important (you can have sentences without verbs at all), while word order plays a vital role. Second, it is unable to handle coordinate constructions. Last but not least, it could have been written in a more concise and more straightforward way (at least for the purpose of generating Chinese).Our generator is composed of a set of LISP functions listed below:GENERATE GEN_SENTENCE GEN_CLAUSE GEN_STN_HEAD GEN_SUBJECT GEN_VERB GEN_OBJECT GEN_INDOBJ GEN_DOBJ GEN_MOBJ* GEN_POST_VERB_MODThe top one, GENERATE, takes as its argument a semantic representation and returns as output a Chinese sentence. It sets a global variable STN_SUBJ for later use (conjunction reduction), and calls a function STN_TAIL to get the appropriate sentence ending punctuation.The main function GEN_SENTENCE is called within GENERATE. It checks whether there is a conjunction at clause level; if there is, it calls GEN_SENTENCE recursively to process the conjuncts one by one (each conjunct may itself be comprised of a conjunction and two or more clause-conjuncts). Then we have the basic clause constructor, GEN__CLAUSE, which outputs single clauses. We decompose GEN_CLAUSE into specialists for constructing the major constituents of the clause: GEN_SUBJ, GEN_VERB, GEN_OBJ and GEN_POST_VERB_MOD. The building blocks needed for those specialists (i.e., noun phrases, preposition phrases, adjective phrases, etc.) are supplied by functions GEN_NP, GEN_PP, and GEN_ADJP.The Chinese language is basically an SVO language, though there are cases where the pattern SOV or OSV or even OVS occurs. 
We can rewrite any Chinese sentence in an SVO pattern while maintaining the fundamental meaning structure of the sentence. A text containing such sentences may be boring to read, but the economy achieved within CASSEX by having only one sentence pattern is much more important to us. Therefore, in our generation procedure, a uniform pattern SVO is assumed. This determines the definition of the function GEN_CLAUSE: The function GEN_STN_HEAD returns any adverbial indicating time (e.g., the Chinese equivalents of "yesterday", "in 1983", etc.), working on the case string TIME_LOCATION_STR. GEN_SUBJ works on AGENT_STR; GEN_INDOBJ on RECIPIENT_STR; GEN_DOBJ on OBJECT_STR, and so on. Each of these functions check the occurrences of conjunctions, premodifiers and postmodifiers and produce noun-phrases accordingly. The function GEN VERB returns the main verb together with adverbials indicating PLACE_LOCATION, REASON, MANNER, or INSTRUMENT in the order as listed above; all of them precede the verb. This function produces time marker(s) as well; there are five of them in Chinese: LE, ZUO, GUO, JIAN and ZAI. Time marking in Chinese is far less strict than in English (very often additional means are employed to indicate time. A detailed contrastive description of time marking in Chinese and in English is impossible here, though). The function GEN_POST_VERB_MOD takes RESULT_STR or GOAL_STR, and returns adverbials (or adverbial clauses) indicating the result or the goal of the action the verb describes (e.g., "to kill Mary" in "John made a gun to kill Mary").(In most cases, each sense of an English word (as defined in CASSEX's dictionary) has a single Chinese equivalent (a surface Chinese word). Sometimes one English sense has more than one Chinese equivalent, depending on the context. For instance, "wear" in the sense of "to carry or have (a garment, etc.) on one's person as * In the notation of dependency grammar we adapt, a major constituent of a given sentence is a constituent immediately dominated by the main verb of the sentence. clothing, ornament, etc." should be translated as "chuan" in "wear clothes, shoes, stockings, etc"; "daih"* in "wear a hat, jewels, glasses, etc."; and "daa" in "wear a tie". I plan to resolve this multi-choice problem by having extra semantic primitives providing finer word-sense discrimination in the dictionary so that, in the semantic representation produced by the analyser, each word-sense will have just one Chinese equivalent. Then, when generating Chinese words, we just extract those equivalents from the bilingual dictionary where each entry is headed by an English word-sense instead of a word.Conjunction Reduction (Ross 67) defines Conjunction Reduction as follows (p.97):We propose a rule of Conjunction Reduction which Chomsky-adjoins to the right or the left of the coordinate node a copy of some constituent which occurs in all conjuncts on a right or left branch, respectively, and then deletes the original nodes.The semantic representations delivered by the CASSEX package are structures with the deleted constituents recovered. For instance, the representation produced for the sentence 5The man kicked and threw the ball. In the generation stage, in order to get well-formed Chinese sentences, we must apply the Conjunction Reduction rule. 
Only forward deletion of the subject in a conjoined clause, and of the attribute in a conjoined NP, is obligatory in Chinese (i.e., we only Chomsky-adjoin to the left of the coordinate node a copy of the repeated constituent before deleting the original nodes). This is implemented in our generator, so that the output for (5) is

RENX TIX LE QIUX, REN LE QIUX.*
(man kick PARTICLE ball, throw PARTICLE ball)

* I use letters to indicate the four tones of Chinese characters: zero - 1st tone; x - 2nd tone; repetition of the first letter of the vowel - 3rd tone; h - 4th tone. Examples: MA, MAX, MAA, MAH.
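The deletion step itself is simple enough to sketch. The following Python fragment illustrates the obligatory forward deletion just described: the analyser delivers conjoined clauses with the shared subject recovered in every conjunct, and the generator keeps the subject only in the first one. The flat clause records and the output routine are invented simplifications; the romanisation is taken from the example above.

```python
# Minimal sketch of forward deletion of a repeated subject across conjuncts.

def reduce_conjunction(clauses):
    """Forward-delete a subject repeated from the first conjunct."""
    reduced = [dict(clauses[0])]
    first_subject = clauses[0].get("subject")
    for clause in clauses[1:]:
        clause = dict(clause)
        if clause.get("subject") == first_subject:
            clause["subject"] = None      # deleted; only the leftmost copy survives
        reduced.append(clause)
    return reduced

def realise(clause):
    """Very crude surface routine: subject (if any), verb, particle, object."""
    parts = []
    if clause.get("subject"):
        parts.append(clause["subject"])
    parts.extend([clause["verb"], "LE", clause["object"]])
    return " ".join(parts)

# "The man kicked the ball and (the man) threw the ball."
clauses = [
    {"subject": "RENX", "verb": "TIX", "object": "QIUX"},
    {"subject": "RENX", "verb": "REN", "object": "QIUX"},  # "REN" = throw, romanised as in the text
]
print(", ".join(realise(c) for c in reduce_conjunction(clauses)) + ".")
# -> RENX TIX LE QIUX, REN LE QIUX.
```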
null
The CASSEX package and the generator are written in RUTGERS-UCI LISP and implemented on the University of Essex's PDP-10 computer. A couple of dozen English sentences, all of them containing the conjunction "and" and involving Gapping or Right Node Raising as well as the common cases of coordination, have been tested with the program, and good-quality Chinese sentences are generated (see Appendix). The project is still at the experimental stage, however. More work needs to be done before it becomes a practical English-Chinese MT system.
Main paper: An Outline of Boguraev's System

The CASSEX package is a parser developed from (Boguraev 79)'s work, a system based on ATN grammars (Woods 73) and Preference Semantics (Wilks 75). Boguraev's major aim was to resolve linguistic ambiguities, either lexical or structural, in individual sentences. The resolution of ambiguities is shown by generating paraphrases for input sentences. Referential ambiguities, as well as ambiguities caused by conjunctions, were not taken into consideration in his system.

The overall design of Boguraev's system bears a strong resemblance to that of Winograd (1972). "The analyser ... seeks to use strong semantic judgment within the framework supplied by syntactically-driven parsing" (Boguraev 79, p. 0.2). Semantic routines (NPBUILD and SBUILD) are called after the system's syntax parser (an ATN) recognizes a noun phrase, relative clause, complement or a complete sentence. They fulfil two tasks:

1. Structurally, constructing for every input sentence one or more semantic representations, each a dependency tree with the verb as the most important node and case slots as its daughters (discussed in more detail in Section 1.4).

2. Judgementally, ruling out ill-formed semantic structures, thereby blocking syntactically valid paths. In other words, the semantic routines confirm or block the syntactic paths of the parser; they never drive the parser (i.e. never suggest a particular syntactic path).

In the following sub-sections we will look at some of the system's features.

The Resolution of Word-sense Ambiguities

Word-sense ambiguities are resolved by the semantic routines. For instance, the sentence

(1) The green crook kicked the ball

could have sixteen possible interpretations, if we assume four word-senses for "green" ("green-coloured", "inexperienced", "angry", and "unripe"), two for "crook" ("shepherd stick" and "villain") and two for "ball" ("a spherical object for playing with" and "a social event for dancing"); however, NPBUILD delivers only two interpretations of "the green crook" for later processing, the two corresponding to "the inexperienced villain" and "the green-coloured shepherd stick". Then, after two readings of "the ball" are built, SBUILD is called and only one interpretation for the sentence is constructed, which is valid both syntactically and semantically, and reads to the effect that "The inexperienced villain kicked the spherical object".

Apart from tackling the problem of attaching prepositional phrases to appropriate constituents (mainly noun phrases or verbs), Boguraev attempts to handle all the ways in which prepositions can occur in a sentence. These are: as the particles of particled verbs, such as "away" in "throw away"; in semi-idiomatic expressions like "green with envy", or where with a particular verb different prepositions impose different meanings on the verb or express a finer distinction of meaning, e.g. "aim at" vs "aim for"; in obligatory cases, e.g. "look at", "look for", etc.; and in optional cases, e.g. "go to the theatre with somebody", "rise with the sun", etc.

Boguraev's dictionary design allows for the first three types (these are not yet implemented in the CASSEX package); the fourth is handled by preplates, an adaptation of (Wilks 75)'s paraplates. The preplates allow not only for the modification of verbs (as paraplates did) but also of nouns. The structure of preplates is the same for both verbs and nouns. The following are the preplates for "with"*:

((*ENT ATTRIBUTE *INAN) WITH1)
((MOVE INSTRUMENT THING) WITH2)
((NOTHAVE MANNER *MAR) WITH3)
((STRIK INSTRUMENT *INST) WITH4)
((CHANGE INSTRUMENT *INST) WITH5)
((CAUSE INSTRUMENT *INST) WITH6)
(((SEE SENSE) INSTRUMENT (SEE THING)) WITH7)
((*DO MANNER *MAN) WITH8)
((*HUM ACCOMPANIMENT *HUM) WITH9)
((*DO ACCOMPANIMENT *HUM) WITH10)

The actual preplate contains three elements. The first is the preferred semantic category of whatever constituent is being modified (a verb or a noun phrase); the second is the case relation between the verb (or NP) and the postmodifying PP; the third is the required semantic category of the head noun of the postmodifying PP.

* "WITH1", "WITH2", etc., are attached to the original preplates in Boguraev's system so as to meet the need of generating Chinese at a later stage: "with" appearing in different preplates can have different equivalents in Chinese. In the following text, when talking about preplates, we will mean the actual preplate triples.

To show how preplates work in attaching PPs, consider the sentence

(2) I hit the man with the hammer.

The PP, or rather the head of the PP, is "hammer". Its head primitive is INST. The PP can either modify the verb "hit", whose head primitive is STRIK, or the NP "the man", whose head primitive is MAN. Two of the above preplates match. Firstly, the preplate (*ENT ATTRIBUTE *INAN) matches, because MAN ("the man") is an *ENT and INST ("the hammer") is an *INAN. Hence, the PP is tied to the NP: "the hammer" is an ATTRIBUTE of "the man". The sentence could be paraphrased as "I hit the man who had the hammer". Secondly, (STRIK INSTRUMENT *INST) matches, because the head primitives of "hit" and "hammer" are STRIK and INST. The PP is tied to the verb: "the hammer" has an INSTRUMENT relation to the verb. According to this case relation and PP attachment, the sentence could be paraphrased as "I hit the man with the hammer that I had".

As was mentioned earlier, the semantic representations delivered by the CASSEX package are dependency trees with verbs as the most important nodes and case slots as their daughters. The representation for sentence (1), "The green crook kicked the ball", is as follows:

(CLAUSE (TYPE NIL) (QUERY NIL) (TNS PAST) (ASPECT NIL) (MODALITY NIL) (NEG NIL)
  (V (KICK ((*ANI SUBJ) ((*PHYSOB OBJE) ((THIS (MAN PART)) INST) STRIK))
     (OBJECT ((BALL1 (NOTFLOW THING)) (NUMBER SINGLE) (QUANTIFIER SG)
              (DETERMINER ((DET1 ONE)))))
     (AGENT ((CROOK1 (((NOTGOOD ACT) OBJE) DO) (SUBJ MAN))
             (STATE (GREEN4 ((MAN POSS) (((NOTMUCH (TRUE THINK)) (SUBJ KIND))))))
             (NUMBER SINGLE) (QUANTIFIER SG) (DETERMINER ((DET1 ONE))))))))

These representations, as we can see above, clearly show the syntactic structure and the case relations between word-senses within constituents and between constituents. The surface sentence, together with the word order, has been lost; we don't need to carry it along, as many MT systems do (e.g. Liu 81), because dependency trees provide enough information for generating Chinese.

The major improvement of the CASSEX package over Boguraev's system is its ability to process conjunctions.
In order to achieve this, grammars specifically designed for conjunctions have been incorporated into the system (see Huang 83 for detail). The CASSEX package deals with sentences containing Gapping, Right Node Raising or Reduced Conjunction, as well as the common cases of "and" conjunction. As for the representation of conjunctions, I follow (Ross 67)'s line, treating them as sisters of the conjuncts. The following are two examples.(3)The man with the telescope and the umbrella kicked the ball.The man kicked the ball and the woman threw the ball.The Generator boguraev's generator: The generation procedure in Boguraev's system is used for providing paraphrases of the original input sentences. It contains three main steps:Selection of the main verb from a set of verbs synonymous with the verb-sense in the semantic representation given by the analyser, and selection of the rest of the target language words (here the target language is English). This step reduces the number of possible output verb synonyms to just one.Definition of the syntactico-semantic relationships. This is realised by the production of an environment network which contains both syntactic and semantic information relevant to the contextual environment (i.e., the information stored in the Wilksian word-sense formula) of the main verb.Actual output of the generated sentence. This phase makes extensive use of the target language dictionary and grammar rules and makes sure that the generated sentence is a syntactically well-formed string of words.The generator works impressively, producing well-formed paraphrases for many ambiguous sentences.The Chinese Generator Boguraev's generator doesn't suit our purpose very well, however, for several reasons. First, it was written for paraphrasing in English, hence its verb-centred nature (emphasis on main verb selection; the production of the environment network around the verb). In Chinese, the verb is less important (you can have sentences without verbs at all), while word order plays a vital role. Second, it is unable to handle coordinate constructions. Last but not least, it could have been written in a more concise and more straightforward way (at least for the purpose of generating Chinese).Our generator is composed of a set of LISP functions listed below:GENERATE GEN_SENTENCE GEN_CLAUSE GEN_STN_HEAD GEN_SUBJECT GEN_VERB GEN_OBJECT GEN_INDOBJ GEN_DOBJ GEN_MOBJ* GEN_POST_VERB_MODThe top one, GENERATE, takes as its argument a semantic representation and returns as output a Chinese sentence. It sets a global variable STN_SUBJ for later use (conjunction reduction), and calls a function STN_TAIL to get the appropriate sentence ending punctuation.The main function GEN_SENTENCE is called within GENERATE. It checks whether there is a conjunction at clause level; if there is, it calls GEN_SENTENCE recursively to process the conjuncts one by one (each conjunct may itself be comprised of a conjunction and two or more clause-conjuncts). Then we have the basic clause constructor, GEN__CLAUSE, which outputs single clauses. We decompose GEN_CLAUSE into specialists for constructing the major constituents of the clause: GEN_SUBJ, GEN_VERB, GEN_OBJ and GEN_POST_VERB_MOD. The building blocks needed for those specialists (i.e., noun phrases, preposition phrases, adjective phrases, etc.) are supplied by functions GEN_NP, GEN_PP, and GEN_ADJP.The Chinese language is basically an SVO language, though there are cases where the pattern SOV or OSV or even OVS occurs. 
We can rewrite any Chinese sentence in an SVO pattern while maintaining the fundamental meaning structure of the sentence. A text containing such sentences may be boring to read, but the economy achieved within CASSEX by having only one sentence pattern is much more important to us. Therefore, in our generation procedure, a uniform pattern SVO is assumed. This determines the definition of the function GEN_CLAUSE: The function GEN_STN_HEAD returns any adverbial indicating time (e.g., the Chinese equivalents of "yesterday", "in 1983", etc.), working on the case string TIME_LOCATION_STR. GEN_SUBJ works on AGENT_STR; GEN_INDOBJ on RECIPIENT_STR; GEN_DOBJ on OBJECT_STR, and so on. Each of these functions check the occurrences of conjunctions, premodifiers and postmodifiers and produce noun-phrases accordingly. The function GEN VERB returns the main verb together with adverbials indicating PLACE_LOCATION, REASON, MANNER, or INSTRUMENT in the order as listed above; all of them precede the verb. This function produces time marker(s) as well; there are five of them in Chinese: LE, ZUO, GUO, JIAN and ZAI. Time marking in Chinese is far less strict than in English (very often additional means are employed to indicate time. A detailed contrastive description of time marking in Chinese and in English is impossible here, though). The function GEN_POST_VERB_MOD takes RESULT_STR or GOAL_STR, and returns adverbials (or adverbial clauses) indicating the result or the goal of the action the verb describes (e.g., "to kill Mary" in "John made a gun to kill Mary").(In most cases, each sense of an English word (as defined in CASSEX's dictionary) has a single Chinese equivalent (a surface Chinese word). Sometimes one English sense has more than one Chinese equivalent, depending on the context. For instance, "wear" in the sense of "to carry or have (a garment, etc.) on one's person as * In the notation of dependency grammar we adapt, a major constituent of a given sentence is a constituent immediately dominated by the main verb of the sentence. clothing, ornament, etc." should be translated as "chuan" in "wear clothes, shoes, stockings, etc"; "daih"* in "wear a hat, jewels, glasses, etc."; and "daa" in "wear a tie". I plan to resolve this multi-choice problem by having extra semantic primitives providing finer word-sense discrimination in the dictionary so that, in the semantic representation produced by the analyser, each word-sense will have just one Chinese equivalent. Then, when generating Chinese words, we just extract those equivalents from the bilingual dictionary where each entry is headed by an English word-sense instead of a word.Conjunction Reduction (Ross 67) defines Conjunction Reduction as follows (p.97):We propose a rule of Conjunction Reduction which Chomsky-adjoins to the right or the left of the coordinate node a copy of some constituent which occurs in all conjuncts on a right or left branch, respectively, and then deletes the original nodes.The semantic representations delivered by the CASSEX package are structures with the deleted constituents recovered. For instance, the representation produced for the sentence 5The man kicked and threw the ball. In the generation stage, in order to get well-formed Chinese sentences, we must apply the Conjunction Reduction rule. 
Only forward deletion of the subject in a conjoined clause and of the attribute in a conjoined NP is obligatory in Chinese (i.e., we only Chomsky-adjoin to the left of the coordinate node a copy of the repeated constituent before deleting the original nodes). This is implemented in our generator so that the output for (5) is RENX TIX LE QIUX, REN LE QIUX. man kick PARTICLE ball throw * I use letters to indicate the four tones for Chinese characters: zero -1st tone; x -2nd tone; repetition of the first letter of the vowel -3rd tone; and h -4th tone. Examples: MA, MAX, MAA, MAH. conclusion: The CASSEX package and the generator are written in RUTGERS-UCI LISP and implemented on the University of Essex's PDP-10 computer. A couple of dozen of English sentences, all of them containing the conjunction "and" and involving Gapping or Right Node Raising as well as the common cases of coordination, are tested with the program and good quality Chinese sentences are generated (see Appendix). The project is still in the experiment stage, however. More work needs to be done before it becomes a practical English-Chinese MT system. introduction: A Natural Language (NL) generator can be a system on its own right, as is (Meehan 76)'s TALE_SPIN which generates stories. More usually, however, a generator is part of a larger system, which generates surface text from an intermediate data structure produced by another component of the system, the analyser.The generation component of a NL system plays a twofold role: firstly, it tests whether or not the output of the analysis component is correct, thus providing a kind of feedback to the analyser writer. For instance, (Goldman 75)'s generator BABEL detects that in the PARAPHRASE MODE, (Schank 75)'s conceptual analyser MARGIE fails to find the "reader" of the book:INPUT: Reading the book reminded Rita to loan the book to Bill. OUTPUT: Rita remembered to give the book to Bill and she expects him to return it to her because someone read the book.Secondly, if the analysis output is correct, it tests whether or not the representation is good, in terms of the cost and efficiency involved in getting the final result usable to the user (inferences, paraphrases, summaries, answers, or translations, depending on the purpose of the system). Therefore, although generation has "traditionally been the poor relation in NL work" (Cater 81, p.30), a good generator is obviously a necessity to all NL workers.For generating surface text from an intermediate data structure, we can either employ a connected body of grammar rules, most often an ATN generation grammar (Goldman 75, Simmons and Slocum 72, and Burton 76), sometimes an ATN for both analysing and generation (Shapiro 82 ); or we can use a set of functions or specialists (Boguraev 79, Cater 81).The generation procedure described in this paper takes the latter approach of using a set of functions because it is more straightforward and more economical to implement (you don't need another interpreter to run the generation ATN, for instance). Used in conjunction with the CASSEX package, an English sentence analyser, the generator produces good quality Chinese translations for a group of English sentences all of which contain the conjunction "and". The analyser and the generator comprise a prototype English-Chinese Machine Translation (MT) system. In this paper I will review the CASSEX package first, then give a description of the generation procedure.The CASSEX package Appendix:
null
null
null
null
{ "paperhash": [ "huang|dealing_with_conjunctions_in_a_machine_translation_environment", "scha|semantic_grammar:_an_engineering_technique_for_constructing_natural_language_understanding_systems", "meehan|the_metanovel:_writing_stories_by_computer", "ross|constraints_on_variables_in_syntax" ], "title": [ "Dealing With Conjunctions in a Machine Translation Environment", "Semantic grammar: an engineering technique for constructing natural language understanding systems", "The Metanovel: Writing Stories by Computer", "Constraints on variables in syntax" ], "abstract": [ "The paper presents an algorithm, written in PROLOG, for processing English sentences which contain either Gapping, Right Node Raising (RNR) or Reduced Conjunction (RC). The DCG (Definite Clause Grammar) formalism (Pereira & Warren 80) is adopted. The algorithm is highly efficient and capable of processing a full range of coordinate constructions containing any number of coordinate conjunctions ('and', 'or', and 'but'). The algorithm is part of an English-Chinese machine translation system which is in the course of construction.", "One of the major stumbling blocks to more effective used computers by naive users is the lack of natural means of communication between the user and the computer system. This report discusses a paradigm for constructing efficient and friendly man-machine interface systems involving subsets of natural language for limited domains of discourse. As such this work falls somewhere between highly constrained formal language query systems and unrestricted natural language under-standing systems. The primary purpose of this research is not to advance our theoretical under-standing of natural language but rather to put forth a set of techniques for embedding both semantic/conceptual and pragmatic information into a useful natural language interface module. Our intent has been to produce a front end system which enables the user to concentrate on his problem or task rather than making him worry about how to communicate his ideas or questions to the machine.", "Abstract : People draw on many diverse sources of real-world knowledge in order to make up stories, including the following: knowledge of the physical world; rules of social behavior and relationships; techniques for solving everyday problems such as transportation, acquisition of objects, and acquisition of information; knowledge about physical needs such as hunger and thirst; knowledge about stories their organization and contents; knowledge about planning behavior and the relationships between kinds of goals; and knowledge about expressing a story in a natural language. This thesis describes a computer program which uses all information to write stories. The areas of knowledge, called problem domains, are defined by a set of representational primitives, a set of problems expressed in terms of those primitives, and a set of procedures for solving those problems. These may vary from one domain to the next. All this specialized knowledge must be integrated in order to accomplish a task such as storytelling. The program, called TALE-SPIN, produces stories in English, interacting with the user, who specifies characters, personality characteristics, and relationships between characters. Operating in a different mode, the program can make those decisions in order to produce Aesop-like fables. (Author)", "Massachusetts Institute of Technology. Dept. of Modern Languages and Linguistics. Thesis. 1967. Ph.D." 
], "authors": [ { "name": [ "Xiuming Huang" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "R. J. H. Scha" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "J. Meehan" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "J. Ross" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null, null ], "s2_corpus_id": [ "7526396", "263227606", "44474741", "60624374" ], "intents": [ [], [ "methodology" ], [ "background" ], [ "background" ] ], "isInfluential": [ false, false, false, true ] }
- Problem: The paper aims to describe the CASSEX package, a parser that generates semantic representations of English sentences and translates them into Chinese. The focus is on resolving linguistic ambiguities and processing conjunctions in the generation of Chinese translations. - Solution: The paper proposes a generation procedure using a set of functions to produce high-quality Chinese translations for English sentences containing the conjunction "and". This approach is more straightforward and economical compared to other methods, contributing to the development of an English-Chinese Machine Translation system.
492
0.00813
null
null
null
null
null
null
null
null
4e5b845f7095f2d34c55a2656d5c321230a8b278
237558787
null
Language or information: a new role for the translator
After three days of hearing about machine translation, machine-aided translation, terminology, lexicography, in fact about the methodology and techniques that are making it possible to escalate the information flow to unprecedented proportions, I am astounded at my temerity in agreeing to speak about so pedestrian a subject as the role of the human translator. In vindication, may I say that I have spent a considerable number of years in training future generations of translators, so that this is perhaps an act of self-justification. I might add that what I have heard this week has not led me to believe that the translator's skills, unlike the compositor's, have become obsolete. I am convinced, however, that these skills must be adapted and expanded so that the translator can continue to play his vital role in the dissemination of information and as a "keeper of the language".
{ "name": [ "Rommel, Birgit" ], "affiliation": [ null ] }
null
null
Proceedings of the International Conference on Methodology and Techniques of Machine Translation: Processing from words to language
1984-02-01
0
1
null
Who is this translator? Why is he a translator? What skills does he have and how has he acquired them? What is his professional status? How can he achieve job satisfaction? Will he have any part to play in the development and application of machine and machine-aided translation? Must the established translator learn new tricks and should the translator-in-training be offered an entirely new training programme? These were only some of the questions that sprang to mind when I was asked to present the translator's view to wind up this conference. It will be useful to find answers to some of these questions before going on to the main body of my paper.

Let us take a look at the average freelance translator today. Although working conditions may well differ greatly from country to country, translators probably concur in regard to professional ethos. (For guidance and support the translator can refer to FIT's Translator's Charter as well as UNESCO's "Recommendation on the legal protection of translators and translations and the practical means to improve the status of translators", adopted in 1976.) In this connection I should perhaps mention the special case of a translator who works in a country where his mother tongue is not spoken. As an English native speaker, for example, I work in an entirely German-speaking environment. This has the disadvantage of the ear and eye both being more readily attuned to German, my source language, and the danger of succumbing to its structures and idiom is exacerbated. I have chosen to take the freelance as an example of the average translator because, again, the bulk of my experience is in that area. The working environment apart, however, there is not much to choose between the work done by a freelance and that done by either a staff or an agency translator.

To return to my earlier questions: why does the translator translate and how has he acquired his skills? Up to the 1950s, the translator usually either came from an academic language background and then moved on to special subjects like engineering, law or banking, or his background was economics, for example, or metallurgy, and he used his specialized knowledge of a subject in association with his language skills - so that quite often he entered the translating profession quite by chance. During the past twenty years an increasing number of translation institutes and university departments have been established to train future translators either at undergraduate or at graduate level - the latter often catering for postgraduates with a good command of foreign languages, but a degree in a scientific or business discipline. Translators today could have any of these backgrounds, but specialized training is increasingly in demand, so that in future this will probably be the rule rather than the exception.

Why translate? Obviously because the translator has a natural facility in languages; enjoys working with them; finds satisfaction in rendering a text in a foreign language into something that reads like an original in his mother tongue. He will only enjoy translation if he has certain qualities: self-discipline, an enquiring mind, some humility, a certain degree of pedantry, the ability to work alone but still keep in touch, not only self-criticism and knowing his own limitations, but being able to accept criticism.
He must have a good grounding in the culture, life and institutions of the countries of both his source and his target languages (SL/TL) and be prepared to invest time and effort not only in keeping his languages really up to date but also in staying abreast of developments in his various fields of specialization and in working up new subjects. Versatility, flexibility and reliability should be his hallmarks.

And this paragon is to be replaced by a mere machine? Well, no, that is hardly likely. From what I have heard here this week and from my own very limited experience of MT and text processing, the machine should make it easier for the translator to demonstrate his virtues, to highlight them as it were. He must, however, learn to accept and to use the machines as the aids they are intended to be.

As a part-time freelance translator myself (the rest of my time is taken up with teaching my future peers) I am convinced that the translator's profession cannot but benefit from the advent of new technology, be this in the form of fully automatic high-quality machine translation systems or machine-aided translation, including the whole range of equipment and software now available on the market - which will probably increase considerably in the next few years. I should like to digress here to discuss briefly what I consider will be the main positive impact for the translator of machine aids in the broadest sense.

It seems to me that in the future the translator will be expected to wear two hats: the translator's and the translation consultant's. In the latter guise he will advise the client on the best method of solving his particular translation problem. This means first of all that the text itself will undergo much more careful scrutiny than has often been the case hitherto, with the main focus on the reasons for commissioning the translation. Advice can then be given on the most cost-effective translation in terms of user requirements. Repetitive tasks such as texts listing testing specifications, describing assembly procedures, giving directions for use, etc. can safely be stored ready as "boiler-plate" information in a particular format and adapted as required. This has two advantages: it relieves the translator of an exceedingly tedious task (I speak from bitter experience) and ensures that no unnecessary errors creep in. (I am convinced that had I always been supplied with standard machine-translated instructions for certain foreign appliances I should have managed to blow fewer fuses and cut fewer fingers in trying to get them to work.) Translation texts that are intended for information only, particularly those that will simply be scanned for the information they contain and then discarded, are often a waste of the translator's skills. Nor would it be cost-effective to employ such skills for this purpose. The style is of little consequence, and the quality of the presentation is usually also not important. Naturally, if the job is not repetitive there can be no standard format, but standard vocabulary should be available and a machine or machine-aided translation could be supplied as a computer printout, for example, marked "for information only". The advantages of using machine translation for large-scale jobs, particularly where the time factor is of the essence, have already been clearly demonstrated during the course of this conference.

As a result, the translator will have time to concentrate on those texts intended for publication, including radio and TV, advertising, slide shows, etc.
But even if the translator's work is confined to texts that can be said to merit his skills, he will still be well advised to use machine aids. He can use a word processor on which to prepare his translation, with all the editing facilities that it offers. This means that he can spend much more time polishing his first draft without needing to retype. (This is particularly important for the translator working abroad, who invariably has to do his own typing - the local typists make too many errors when working from dictation.) There is no doubt at all that a wp is an invaluable aid to improving the quality of a translation. If the translator uses a microcomputer he can have all the advantages of a wp (though perhaps it is not always quite as convenient to use editing software as it is to use a dedicated wp) with the added plus of having an incredibly wide choice of applications software, ranging from spelling checks and style control to a program for recognizing split infinitives! A further advantage here is that if he feels he would like to arrange his personal glossaries in a certain way, with certain forms of cross-referencing, he can commission programs to be written or, ideally, write them himself. After all, he would really only need to learn another language!

I started experimenting with micros and wps last autumn, and found that the final printed versions of the texts I submitted to my clients were certainly more polished than I had previously managed, given the usual "for yesterday" deadline. This was only partly because the actual typing was a one-off chore. More important, revising and editing were almost a pleasure (the beginner's admiration for 'clever' equipment is very much to the fore here). The total time spent, however, was generally even less than I would have spent on previous similar jobs, and this despite my lack of experience in the use of such equipment. For me there is no question at all that, given the tight deadlines invariably set, the freelance translator using machine aids intelligently can easily increase his volume of output and probably also improve the quality - if only from the point of view of presentation. He can save time by building up his own retrieval system, and by accessing national and international term banks such as Eurodicautom. In the long run, of course, he will also benefit from electronic publishing, especially if dictionaries and other reference works are no longer necessarily produced as hard copy. I have found that word-processed work is infinitely more acceptable to the client, so that it seems likely that the translator who has adjusted early to the electronic age will find that his clients multiply - particularly if he can supply his work on diskette compatible with his client's equipment. The translator's working procedures must change, and the resulting streamlining will mean greater speed and higher quality. This can only add up to more job satisfaction, and it is to be expected that earnings will also rise - what more could one wish for?

This panegyric on machine aids is, of course, only theory. I have not considered the financial investments required, the running and servicing costs, nor indeed the possible competition between man and machine - though I doubt very much whether the "quality" translator is in any danger. But there is one fear that I have heard expressed repeatedly in "old-fashioned" translator circles, and one that has also been voiced by many - discriminating - clients: What is going to happen to the language?
They are afraid that through machine translation, language, style and perhaps also ideas will become arid, stereotyped. The scenario they so often envisage is a mixture of Alice at cross purposes with Humpty Dumpty and Syme extolling the virtues of Newspeak to Winston Smith in Nineteen Eighty-Four; not to mention the somewhat odd statements that are issued from the Pentagon and other sources in Washington, which the average English native speaker finds so hard to decode. I can understand their fear. If we take Through the Looking-Glass, for example, Humpty Dumpty explains: "When I use a word, it means just what I choose it to mean - neither more nor less." Alice naturally wonders "whether you can make words mean different things", whereupon Humpty Dumpty states: "The question is which is to be master ...". The machine, perhaps? Or if we look at Nineteen Eighty-Four, Syme asks Winston Smith: "Don't you see that the whole aim of Newspeak is to narrow the range of thought? ... Every concept that can ever be needed will be expressed in exactly one word, with its meaning rigidly defined and all its subsidiary meanings rubbed out and forgotten." One envisages pages and pages of translated text being churned out by the machine using a specific limited vocabulary and a standard set of structures. Style will have become a non-word, so of course we shall not miss it. Is this threat to language, thought and culture real or imaginary?

The scenario suggests two aspects of language development: language evolving naturally in a multi-media society, and language adjusted artificially to expedite batch processing. When we talk about language we usually mean both written and spoken language. New vocabulary, different shades of meaning and alternative structures can be introduced into either, and the two forms influence each other. In this context we are mainly concerned with the written language, the language which is so often criticized as growing increasingly slovenly or, indeed, incomprehensible. Look at the letters page in The Times, for example, almost any day of the week and there will be some comment on incorrect usage, lack of respect for established rules, a general decline of the language. The English language has been 'going to the dogs' for several centuries now; more than 250 years ago, in 1712, Jonathan Swift wrote in a letter to Harley that he had a Proposal for correcting, improving and ascertaining the English Tongue, with the aim of "fixing our language for ever" and establishing an academy to ensure a permanent standard. Swift was certainly a great stylist, but I am glad that his Proposal was not taken up. Language is a living thing, moulded both by its own inner laws of evolution and by the outside influences of the human society it serves. It is linked to a certain cultural environment and will inevitably reflect changes and developments that take place. The high degree of literacy in the late 20th century and the parallel growth of variant forms of English in territories overseas, together with a wide range of information media, have inevitably had some impact on the written word. Major events can always be traced in language usage, and it is naive to assume that the new Industrial Revolution which has ushered in the microelectronic age will not leave its mark on the language.

Before the Second World War developments were considerably slower than they are now, so that changes in language were less clearly perceptible.
Developments today in communications and information transfer are so rapid that the changes are more striking and we are aware of them, and consequently less ready to accept them. They no longer insinuate themselves into our subconscious as they used to; instead we are bombarded with new expressions by all the media collectively. This mass assault immediately calls for defensive action so that the improvements which would lead to greater clarity and simplicity are also under fire. It seems that almost every day we are expected to assimilate new acronyms, technical terms, euphemisms and jargon although the previous day's quota has not yet been digested. Management consultants indulge in "headhunting", the tax-man suggests "revenue enhancements" and in Pennsylvania chickens are "depopulated" in an attempt to contain an influenza virus. Naturally no-one working with language would champion a statement like "micromanaging a country intelligencewise until about a certain time frame" because of the decoding process required, but I can see no objection to the use of "editorialize" in preference to "expressing an opinion in the form of an editorial". Some terms and phrases will become established but many, particularly the euphemisms, will be supplanted as they begin to take on the connotations of the terms they were coined to replace. It is unlikely that they will find their way into computer dictionaries and term banks. T.S. Eliot summed up the whole process: "Our language, or any civilized language, is like the phoenix: it springs anew from its own ashes." Now let us take a look at the other aspect. It appears to be generally accepted that most MT systems require either pre-editing or post-editing to a lesser or greater degree. As I understand it, pre-editing entails writing or adapting a text for translation using a certain vocabulary and a limited number of structures -a kind of "machinespeak" which could well develop along the same lines as any other form of jargon and insinuate itself into standard usage. Where post-editing is required, the machine does the preparatory work, and the translator (post-editor) disambiguates and/or restructures the translated text. Both approaches, then, retain some form of human involvement with the final product and it is up to the linguist, be he translator, post-editor or revisor, to accept responsibility for "the state of the language".At the beginning of this paper I asked whether the translator was likely to be involved in MT. Apart from highly sophisticated algorithms the system essentially comprises regularly updated dictionaries. In the latter, surely, the translator's help should be sought and given. Obviously, post-editors must supply regular feedback so that the system and its dictionaries can be continually improved. It would seem to me that post-editing is a new field for which the translator must be specially trained but where he could, with some experience, make a real contribution to upgrading the first machine draft. There is always the danger, of course, that through dint of repetition, the post-editor will no longer perceive aberrations of style, unsatisfactory structures or poor vocabulary and thus accept a form of machinespeak as standard usage. Here again, if the post-editor is working in a country where his native tongue is not the spoken language this danger increases. 
My ears have been assailed for so many years with what I call Swinglish (Swiss English) that when I now catch myself about to use a German structure or preposition the warning signals are very faint. Of course every translator is aware of the dangers inherent in long periods of exposure to a foreign language, and if this is compounded by his also being confronted with "mother-tongue machinespeak" the native language could well suffer. Regular post-experience translation workshops, etc. could be of real help here, and they could also be used to counteract yet another threat looming to trap the translator. The more text on screen becomes a commonplace (we might call it computerspeak or videospeak), and the more we learn to accept it in its natural environment, the more difficult it will be to remember that it is not - yet - acceptable in print or as the spoken word.

My thesis is that the translator in future will have the dual role of monitoring changes in the language and disseminating information. And this means that much thought must be given to training translators to fit that role. The teaching programmes in translation institutes and university departments must be adapted accordingly. I do not believe that radical changes need be introduced, because most institutes and departments do supply the translators with the basic tools of the trade - but too often they are the tools for yesterday's trade, perhaps even for today's; rarely if ever are they for tomorrow's. What I envisage is a shift of emphasis, a greater degree of specialization, a much keener awareness of the market. Students and staff must recognize that additional demands will be made on the translator, and that the traditional skills of analysis and language may still constitute the corner-stone of their teaching programme but cannot be regarded as the complete structure. In translation, as in so many other disciplines, new technology is revolutionizing both job description and work procedures. Certain skills are almost obsolete, others in greater demand. The budding translator must be equipped to carry out his new tasks using the wide range of aids available, and increasing emphasis must be laid on information storage and retrieval. But all this, excellent though it may be, must never be regarded as a substitute for mastery of the translator's mother tongue.

In practical terms, I imagine that the translation institutes and university departments will develop a broader range of courses so that their students are given the opportunity to specialize according to their talents and inclinations. Terminologists and lexicographers are just as much a part of the translation scene as post-editors and technical translators. The volume of translation will increase, as will its variety, and as more specialized systems and equipment are developed, specialized personnel must be available to ensure that the right products are used in the right place at the right time. For this it is essential to have the full cooperation of the machine translation industry, the hardware and software companies, national and international term banks, etc. Without their support, it is unlikely that really efficient teaching programmes will be developed, suitable equipment acquired or practical training courses set up.
Translation is now a recognized profession, and machine and machine-aided translation are being developed and refined at breathtaking speed: it is up to all of us to ensure that the status of the translator is recognized and that highly trained specialists in all aspects of translation are available in a growing market.
null
null
null
null
Main paper: : Who is this translator? Why is he a translator? What skills does he have and how has he acquired them? What is his professional status? How can he achieve job satisfaction? Will he have any part to play in the development and application of machine and machine-aided translation? Must the established translator learn new tricks and should the translator-in-training be offered an entirely new training programme? These were only some of the questions that sprang to mind when I was asked to present the translator's view to wind up this conference. It will be useful to find answers to some of these questions before going on to the main body of my paper.Let us take a look at the average freelance translator today. Although working conditions may well differ greatly from country to country, translators probably concur in regard to professional ethos. (For guidance and support the translator can refer to FIT'S Translator's Charter as well as UNESCO's "Recommendation on the legal protection of translators and translations and the practical means to improve the status of translators' adopted in 1976.) In this connection I should perhaps mention the special case of a translator who works in a country where his mother tongue is not spoken. As an English native speaker, for example, I work in an entirely German-speaking environment. This has the disadvantage of the ear and eye both being more readily attuned to German, my source language, and the danger of succumbing to its structures and idiom is exacerbated. I have chosen to take the freelance as an example of the average translator because again the bulk of my experience is in that area. The working environment apart, however, there is not much to choose between the work done by a freelance and that done by either a staff or an agency translator. To return to my earlier questions: why does the translator translate and how has he acquired his skills? Up to the 1950's, the translator usually came from either an academic language background and then moved on to special subjects like engineering, law or banking; or his background was economics, for example, or metallurgy, and he used his specialized knowledge of a subject in association with his language skills -so that quite often he entered the translating profession quite by chance. During the past twenty years an increasing number of translation institutes and university departments has been established to train future translators either at undergraduate or at graduate level -the latter often catering for postgraduates with a good command of foreign languages, but a degree in a scientific or business discipline. Translators today could have any of these backgrounds, but specialized training is increasingly in demand so that in future this will probably be the rule rather than the exception. Why translate? Obviously because the translator has a natural facility in languages; enjoys working with them; finds satisfaction in rendering a text in a foreign language into something that reads like an original in his mother tongue. He will only enjoy translation if he has certain qualities: self-discipline, an enquiring mind, some humility, a certain degree of pedantry, the ability to work alone but still keep in touch, not only self-criticism and knowing his own limitations, but being able to accept criticism. 
He must have a good grounding in the culture, life and institutions of the countries of both his source and his target languages (SL/TL) and be prepared to invest time and effort not only in keeping his languages really up to date but also in staying abreast of developments in his various fields of specialization and in working up new subjects. Versatility, flexibility and reliability should be his hallmarks.And this paragon is to be replaced by a mere machine? Well, no, that is hardly likely. From what I have heard here this week and from my own very limited experience of MT and text processing, the machine should make it easier for the translator to demonstrate his virtues, to highlight them as it were. He must, however, learn to accept and to use the machines as the aids they are intended to be.As a part-time freelance translator myself (the rest of my time is taken up with teaching my future peers) I am convinced that the translator's profession cannot but benefit from the advent of new technology, be this in the form of fully automatic high quality machine translation systems or machine aided translation including the whole range of equipment and software now available on the market -which will probably increase considerably in the next few years. I should like to digress here to discuss briefly what I consider will be the main positive impact for the translator of machine aids in the broadest sense.It seems to me that in the future the translator will be expected to wear two hats: the translator's and the translation consultant's. In the latter guise he will advise the client on the best method of solving his particular translation problem. This means first of all that the text itself will undergo much more careful scrutiny than has often been the case hitherto, with the main focus on the reasons for commissioning the translation. Advice can then be given on the most cost-effective translation in terms of user requirements. Repetitive tasks such as texts listing testing specifications, describing assembly procedures, giving directions for use, etc. can safely be ready stored as "boiler-plate" information in a particular format and adapted as required. This has two advantages: it relieves the translator of an exceedingly tedious task (I speak from bitter experience) and ensures that no unnecessary errors creep in. (I am convinced that had I always been supplied with standard machine-translated instructions for certain foreign appliances I should have managed to blow less fuses and cut fewer fingers in trying to get them to work.) Translation texts that are intended for information only, particularly those that will simply be scanned for the information they contain and then discarded, are often a waste of the translator's skills. Nor would it be cost-effective to employ such skills for this purpose. The style is of little consequence, and the quality of the presentation is usually also not important. Naturally if the job is not repetitive there can be no standard format, but standard vocabulary should be available and a machine or machine-aided translation could be supplied as a computer printout, for example, marked "for information only". The advantages of using machine translation for large-scale jobs, particularly where the time factor is of the essence, has already been clearly demonstrated during the course of this conference.As a result, the translator will have time to concentrate on those texts intended for publication, including radio and TV, advertising, slide shows, etc. 
But even if the translator's work is confined to texts that can be said to merit his skills, he will still be well advised to use machine aids. He can use a word processor on which to prepare his translation, with all the editing facilities that it offers. This means that he can spend much more time polishing his first draft without needing to retype. (This is particularly important for the translator working abroad, who invariably has to do his own typing -the local typists make too many errors when working from dictation.) There is no doubt at all that a wp is an invaluable aid to improving the quality of a translation. If the translator uses a micro computer he can have all the advantages of a wp (though perhaps it is not always quite as convenient to use editing software as it is to use a dedicated wp) with the added plus of having an incredibly wide choice of applications software ranging from spelling checks and style control to a program for recognizing split infinitives! A further advantage here is that if he feels he would like to arrange his personal glossaries in a certain way with certains forms of cross referencing, he can commission programs to be written or, ideally, write them himself. After all, he would really only need to learn another language! I started experimenting with micros and wps last autumn, and found that the final printed version of the texts I submitted to my clients were certainly more polished than I had previously managed, given the usual "for yesterday" deadline. This was only partly because the actual typing was a one-off chore. More important, revising and editing were almost a pleasure (the beginner's admiration for 'clever' equipment is very much to the fore here).The total time spent, however, was generally even less than I would have spent on previous similar jobs, and this despite my lack of experience in the use of such equipment. For me there is no question at all that given the tight deadlines invariably set, the freelance translator using machine aids intelligently can easily increase his volume of output and probably also improve the quality -if only from the point of view of presentation. He can save time by building up his own retrieval system, and by accessing national and international term banks such as Eurodicautom. In the long run, of course, he will also benefit from electronic publishing, especially if dictionaries and other reference works are no longer necessarily produced as hard copy. I have found that word-processed work is infinitely more acceptable to the client, so that it seems likely that the translator who has adjusted early to the electronic age will find that his clients multiply -particularly if he can supply his work on diskette compatible with his client's equipment. The translator's working procedures must change, and the resulting streamlining will mean greater speed and higher quality. This can only add up to more job satisfaction, and it is to be expected that earnings will also rise -what more could one wish for?This panegyric on machine aids is, of course, only theory. I have not considered the financial investments required, the running and servicing costs, nor indeed the possible competition between man and machine -though I doubt very much whether the "quality" translator is in any danger. But there is one fear that I have heard expressed repeatedly in "old-fashioned" translator circles, and one that has also been voiced by many -discriminating -clients: What is going to happen to the language? 
They are afraid that through machine translation language, style and perhaps also ideas will become arid, stereotyped. The scenario they so often envisage is a mixture of Alice at cross purposes with Humpty Dumpty and Syme extolling the virtues of Newspeak to Winston Smith in Nineteen Eighty-Four; not to mention the somewhat odd statements that are issued from the Pentagon and other sources in Washington, which the average English native speaker finds so hard to decode. I can understand their fear. If we take Through the Looking-Glass, for example, Humpty Dumpty explains: "When I use a word, it means just what I choose it to mean -neither more nor less." Alice naturally wonders "whether you can make words mean different things", whereupon Humpty Dumpty states: "The question is which is to be master ...". The machine, perhaps? Or if we look at Nineteen Eighty-Four Syme asks Winston Smith: "Don't you see that the whole aim of Newspeak is to narrow the range of thought? ... Every concept that can ever be needed will be expressed in exactly one word, with its meaning rigidly defined and all its subsidiary meanings rubbed out and forgotten." One envisages pages and pages of translated text being churned out by the machine using a specific limited vocabulary and a standard set of structures. Style will have become a non-word, so of course we shall not miss it. Is this threat to language, thought and culture real or imaginary?The scenario suggests two aspects of language development:Language evolving naturally in a multi-media society, and language adjusted artificially to expedite batch processing. When we talk about language we usually mean both written and spoken language. New vocabulary, different shades of meaning and alternative structures can be introduced into either, and the two forms influence each other. In this context we are mainly concerned with the written language, the language which is so often criticized as growing increasingly slovenly or, indeed, incomprehensible. Look at the letter page in The Times, for example, almost any day of the week and there will be some comment on incorrect usage, lack of respect for established rules, a general decline of the language. The English language has been 'going to the dogs' for several centuries nowmore than 250 years ago, in 1712, Jonathan Swift wrote in a letter to Harley that he had a Proposal for correcting, improving and ascertaining the English Tongue with the aim of "fixing our language for ever" and establishing an academy to ensure a permanent standard. Swift was certainly a great stylist, but I am glad that his Proposal was not taken up. Language is a living thing moulded both by its own inner laws of evolution and by the outside influences of the human society it serves. It is linked to a certain cultural environment and will inevitably reflect changes and developments that take place. The high degree of literacy in the late 20th century and the parallel growth of variant forms of English in territories overseas together with a wide range of information media has inevitably had some impact on the written word. Major events can always be traced in language usage, and it is naive to assume that the new Industrial Revolution which has ushered in the microelectronic age will not leave its mark on the language.Before the Second World War developments were considerably slower than they are now, so that changes in language were less clearly perceptible. 
Developments today in communications and information transfer are so rapid that the changes are more striking and we are aware of them, and consequently less ready to accept them. They no longer insinuate themselves into our subconscious as they used to; instead we are bombarded with new expressions by all the media collectively. This mass assault immediately calls for defensive action so that the improvements which would lead to greater clarity and simplicity are also under fire. It seems that almost every day we are expected to assimilate new acronyms, technical terms, euphemisms and jargon although the previous day's quota has not yet been digested. Management consultants indulge in "headhunting", the tax-man suggests "revenue enhancements" and in Pennsylvania chickens are "depopulated" in an attempt to contain an influenza virus. Naturally no-one working with language would champion a statement like "micromanaging a country intelligencewise until about a certain time frame" because of the decoding process required, but I can see no objection to the use of "editorialize" in preference to "expressing an opinion in the form of an editorial". Some terms and phrases will become established but many, particularly the euphemisms, will be supplanted as they begin to take on the connotations of the terms they were coined to replace. It is unlikely that they will find their way into computer dictionaries and term banks. T.S. Eliot summed up the whole process: "Our language, or any civilized language, is like the phoenix: it springs anew from its own ashes." Now let us take a look at the other aspect. It appears to be generally accepted that most MT systems require either pre-editing or post-editing to a lesser or greater degree. As I understand it, pre-editing entails writing or adapting a text for translation using a certain vocabulary and a limited number of structures -a kind of "machinespeak" which could well develop along the same lines as any other form of jargon and insinuate itself into standard usage. Where post-editing is required, the machine does the preparatory work, and the translator (post-editor) disambiguates and/or restructures the translated text. Both approaches, then, retain some form of human involvement with the final product and it is up to the linguist, be he translator, post-editor or revisor, to accept responsibility for "the state of the language".At the beginning of this paper I asked whether the translator was likely to be involved in MT. Apart from highly sophisticated algorithms the system essentially comprises regularly updated dictionaries. In the latter, surely, the translator's help should be sought and given. Obviously, post-editors must supply regular feedback so that the system and its dictionaries can be continually improved. It would seem to me that post-editing is a new field for which the translator must be specially trained but where he could, with some experience, make a real contribution to upgrading the first machine draft. There is always the danger, of course, that through dint of repetition, the post-editor will no longer perceive aberrations of style, unsatisfactory structures or poor vocabulary and thus accept a form of machinespeak as standard usage. Here again, if the post-editor is working in a country where his native tongue is not the spoken language this danger increases. 
My ears have been assailed for so many years with what I call Swinglish (Swiss English) that when I now catch myself about to use a German structure or preposition the warning signals are very faint. Of course every translator is aware of the dangers inherent in long periods of exposure to a foreign language, and if this is compounded by his also being confronted with "mother-tongue machinespeak" the native language could well suffer. Regular post-experience translation workshops, etc. could be of real help here, and they could also be used to counteract yet another threat looming to trap the translator. The more text on screen becomes a commonplace (we might call it computerspeak or videospeak), and the more we learn to accept it in its natural environment, the more difficult it will be to remember that it is not - yet - acceptable in print or as the spoken word.

My thesis is that the translator in future will have the dual role of monitoring changes in the language and disseminating information. And this means that much thought must be given to training translators to fit that role. The teaching programmes in translation institutes and university departments must be adapted accordingly. I do not believe that radical changes need be introduced, because most institutes and departments do supply the translators with the basic tools of the trade - but too often they are the tools for yesterday's trade, perhaps even for today's; rarely if ever are they for tomorrow's. What I envisage is a shift of emphasis, a greater degree of specialization, a much keener awareness of the market. Students and staff must recognize that additional demands will be made on the translator, and that the traditional skills of analysis and language may still constitute the corner-stone of their teaching programme but cannot be regarded as the complete structure. In translation, as in so many other disciplines, new technology is revolutionizing both job description and work procedures. Certain skills are almost obsolete, others in greater demand. The budding translator must be equipped to carry out his new tasks using the wide range of aids available, and increasing emphasis must be laid on information storage and retrieval. But all this, excellent though it may be, must never be regarded as a substitute for mastery of the translator's mother tongue.

In practical terms, I imagine that the translation institutes and university departments will develop a broader range of courses so that their students are given the opportunity to specialize according to their talents and inclinations. Terminologists and lexicographers are just as much a part of the translation scene as post-editors and technical translators. The volume of translation will increase, as will its variety, and as more specialized systems and equipment are developed specialized personnel must be available to ensure that the right products are used in the right place at the right time. For this it is essential to have the full cooperation of the machine translation industry, the hardware and software companies, national and international term banks, etc. Without their support, it is unlikely that really efficient teaching programmes will be developed, suitable equipment acquired or practical training courses set up.
Translation is now a recognized profession, and machine and machine-aided translation are being developed and refined at a breathtaking speed: it is up to all of us to ensure that the status of the translator is recognized and that highly trained specialists in all aspects of translation are available in a growing market. Appendix:
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
492
0.002033
null
null
null
null
null
null
null
null
f978143c412f6cb79be59f929bb8442e42a5b7f4
237558779
null
Machine-aided translation and lexical strategies
The context of this paper is that of a translator wishing to develop dictionaries for the purposes of machine-aided translation (MAT). A description is given of the ways in which lexical items in running text are statistically "patterned", depending on whether these so-called "types" are left unaltered as they are extracted from the text or whether they are immediately mapped onto the corresponding dictionary look-up form ("lemma") for the purpose of statistical analysis. It is obvious, of course, that for translation purposes it is necessary to establish appropriate entry-points into the MAT dictionary, but this is a secondary problem. There are two dimensions which can assist the machine-assisted translator to a considerable extent. One such factor is any degree of homogeneity - the greater, the better - in the texts he wishes to process. Translators specialising in certain subject areas and types of discourse are at an advantage if they wish to use an MAT system. The second factor is that of the so-called "multi-word unit". Although all languages have multi-word units, which are semantically atomic, they are particularly important in English, and even more so in English technical terminology. Frequency studies of multi-word units, although they generate large listings of types, can be very useful for MAT. The machine-assisted translator is faced with the need to view his work as consisting of two distinct modes: dictionary elaboration and text transaction. The second mode, of course, provides important feed-back to guide the first. One thing is clear: the translator must be his own lexicographer to a great extent, at least until the time when software houses realise the commercial value of such "static" data as general bi-lingual high-frequency dictionaries and the potential "constellation" of carefully designed and delineated bi-lingual glossaries of technical terminology!
{ "name": [ "Knowles, Frank" ], "affiliation": [ null ] }
null
null
Proceedings of the International Conference on Methodology and Techniques of Machine Translation: Processing from words to language
1984-02-01
0
1
null
The context of this paper is that of a translator wishing to develop dictionaries for the purposes of machine-aided translation (MAT). A description is given of the ways in which lexical items in running text are statistically "patterned", depending on whether these so-called "types" are left unaltered as they are extracted from the text or whether they are immediately mapped onto the corresponding dictionary look-up form ("lemma") for the purpose of statistical analysis. It is obvious, of course, that for translation purposes it is necessary to establish appropriate entry-points into the MAT dictionary, but this is a secondary problem.

There are two dimensions which can assist the machine-assisted translator to a considerable extent. One such factor is any degree of homogeneity - the greater, the better - in the texts he wishes to process. Translators specialising in certain subject areas and types of discourse are at an advantage if they wish to use an MAT system. The second factor is that of the so-called "multi-word unit". Although all languages have multi-word units, which are semantically atomic, they are particularly important in English, and even more so in English technical terminology. Frequency studies of multi-word units, although they generate large listings of types, can be very useful for MAT.

The machine-assisted translator is faced with the need to view his work as consisting of two distinct modes: dictionary elaboration and text transaction. The second mode, of course, provides important feed-back to guide the first. One thing is clear: the translator must be his own lexicographer to a great extent, at least until the time when software houses realise the commercial value of such "static" data as general bi-lingual high-frequency dictionaries and the potential "constellation" of carefully designed and delineated bi-lingual glossaries of technical terminology!
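A rough illustration of the type/lemma "patterning" and multi-word unit counting described above can be sketched in a few lines of Python. This is only a hedged example, not part of Knowles' system: the sample sentence, the toy lemma table and the helper names (tokenize, count_types, count_lemmas, count_bigrams) are all invented for illustration, and a real MAT dictionary would need proper tokenisation and a full lemmatiser.

from collections import Counter
import re

def tokenize(text):
    # Lower-case word tokens only; punctuation is discarded.
    return re.findall(r"[a-z]+", text.lower())

def count_types(tokens):
    # Frequency of surface forms ("types") left exactly as extracted from the text.
    return Counter(tokens)

def count_lemmas(tokens, lemma_table):
    # Frequency after mapping each type onto its dictionary look-up form ("lemma").
    return Counter(lemma_table.get(t, t) for t in tokens)

def count_bigrams(tokens):
    # Crude stand-in for multi-word units: adjacent word pairs.
    return Counter(zip(tokens, tokens[1:]))

if __name__ == "__main__":
    sample = "The translators translated the translated texts with machine aids."
    lemma_table = {"translators": "translator", "translated": "translate",
                   "texts": "text", "aids": "aid"}   # toy lemma mapping (assumed)
    tokens = tokenize(sample)
    print(count_types(tokens).most_common(3))
    print(count_lemmas(tokens, lemma_table).most_common(3))
    print(count_bigrams(tokens).most_common(3))

Even on this toy input the lemma counts collapse several surface types onto one entry, which is exactly the difference in statistical patterning the paper discusses.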
null
null
null
null
Main paper: : The context of this paper is that of a translator wishing to develop dictionaries for the purposes of machine-aided translation (MAT). A description is given of the ways in which lexical items in running text are statistically "patterned", depending on whether these so-called "types" are left unaltered as they are extracted from the text or whether they are immediately mapped onto the corresponding dictionary look-up form ("lemma") for the purpose of statistical analysis. It is obvious, of course, that for translation purposes it is necessary to establish appropriate entry-points into the MAT dictionary, but this is a secondary problem.are two dimensions which can assist the machine-assisted translator to a considerable extent. One such factor is any degree of homogeneity -the greater, the better -in the texts he wishes to process. Translators specialising in certain subject areas and types of discourse are at an advantage if they wish to use an MAT system. The second factor is that of the so-called "multi-word unit". Although all languages have multi-word units, which are semantically atomic, they are particularly important in English, and even more so in English technical terminology. Frequency studies of multi-word units, although they generate large listings of types, can be very useful for MAT.The machine-assisted translator is faced with the need to view his work as consisting of two distinct modes: dictionary elaboration and text transaction. The second mode, of course, provides important feed-back to guide the first. One thing is clear: the translator must be his own lexicographer to a great extent, at least until the time when software houses realise the commercial value of such "static" data as general bi-lingual high-frequency dictionaries ana the potential "constellation" of carefully designed and delineated bi-lingual glossaries of technical terminology! Appendix:
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
492
0.002033
null
null
null
null
null
null
null
null
cc8f32f2c2f75f90e5c369f1ec92f32083fd1d4a
237558730
null
What is the language of memory?
This paper outlines the mutually beneficial analogies between the structural dynamics of memory and machine translation, both of which are extensively dependent on fundamental pattern recognition problems. Basically, both processes are faced with a similarly structured problem - namely, the problem of condensing large quantities of data into intelligently interpretable smaller volumes (comprised of basic "information clusters").
{ "name": [ "Mbaeyi, Peter Nwoye O." ], "affiliation": [ null ] }
null
null
Proceedings of the International Conference on Methodology and Techniques of Machine Translation: Processing from words to language
1984-02-01
0
0
null
This paper outlines the mutually beneficial analogies between the structural dynamics of memory and machine translation, both of which are extensively dependent on fundamental pattern recognition problems. Basically, both processes are faced with a similarly structured problem - namely, the problem of condensing large quantities of data into intelligently interpretable smaller volumes (comprised of basic "information clusters"). For machine translation, the alphabets and words of a language (that make up an essay) define these data, while the multiplicities of physico-chemical objects of sensory perception constitute, amongst others, the data compression problem facing the memory functions of the brain. For the neural systems (underlying the memory functions of the brain) recent advancements in generalized quantum theoretical methods provide some bases. While these foundations will not be discussed here in any detail, they are used to define the components of a language compatible with memory dynamics. Essentially, these culminate in associative (quantum) logical problems with analogical counterparts in linguistics and the use of compartmentalization cum associative logic in essay interpretations. For purposes of computational linguistics, this paper makes these analogies precise (on a quantitative analytical basis), with emphasis on discrete recursive generation of larger structures, and equivalents of coding and decoding for the machine translation process.
null
null
null
null
Main paper: : This paper outlines the mutual beneficial analogies between the structural dynamics of memory and machine translation, both of which are extensively dependent on fundamental pattern recognition problems. Basically, both processes are faced with a similarly structured problem -namely, the problem of condensing large quantities of data into intelligently interpretable smaller volumes (comprised of basic "information clusters"). For machine translation, the alphabets and words of a language (that make up an essay) define these data, while the multiplicities of physico-chemical objects of sensory perception constitute, amongst others, the data compression problem facing the memory functions of the brain. For the neural systems (underlying the memory functions of the brain) recent advancements in generalized quantum theoretical methods provide some bases. While these foundations will not be discussed here in any detail, they are used to define the components of a language compatible with memory dynamics. Essentially, these culminate in associative (quantum) logical problems with analogical counterparts in linguistics and the use of compartmentalization cum associative logic in essay interpretations. For purposes of computational linguistics, this paper makes these analogies precise (on quantitative analytical basis), with emphasis on discrete recursive generation of larger structures, and equivalents of coding and decoding for machine translation process. Appendix:
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
492
0
null
null
null
null
null
null
null
null
bb9ff75b8f9a445340d1616de75dd93bc1a945ae
237558707
null
Production of sentences: a general algorithm and a case study
In this paper a procedure for the production of sentences is described, producing written sentences in a particular language starting from formal representations of their meaning. After a brief description of the internal representation used, the algorithm is presented, and some results and future trends are discussed.
{ "name": [ "Adorni, Giovanni and", "Massone, Lina" ], "affiliation": [ null, null ] }
null
null
Proceedings of the International Conference on Methodology and Techniques of Machine Translation: Processing from words to language
1984-02-01
9
0
null
Production of sentences by computer has been approached for several years with the general goals of generating random sentences to test a grammatical theory or converting information from an internal representation into a natural language sentence. The first approach is more oriented toward theoretical linguistics than toward functional natural language processing systems. The objective of implementing a generation system of this sort is to test the descriptive adequacy of the test grammar [10, 2]. The second approach is to take some internal representation of the "meaning" of the sentence and to convert it into a surface-structure form, that is into an appropriate string of words (see, for example [4, 9, 6, 7, 3, 1]).

In this paper this second line has been followed and a procedure for the production of sentences is described, producing written sentences in a particular language from formal representations of their meaning [5]. A characteristic of the meaning representations used is that they are "relatively" universal. In fact they bear no trace of the language into which they will be eventually mapped; it is the production procedure that imposes a language-specific form on the final sentence produced. In principle, from a given meaning representation, sentences in a number of different languages can be produced, provided that the production procedure for each language is available. The meaning representations we are referring to in the paper do not involve any analysis of the meaning of the words. However, their format is compatible with lexical decomposition, and the procedure that we will describe accepts, without need for modification, lexically decomposed meaning representations.

In the next section the internal representation used is briefly outlined and the structure of the vocabulary is described; in section 3 the algorithm is presented and, finally, in section 4 some results and future trends are discussed.

The basic operations of LP are (a) identifying a particular unit in the input list (MR) and (b) finding a lexical entry in the vocabulary that includes that unit as part of its meaning. The choice of the lexical entry is driven by a "principle of maximum overlap": if the vocabulary includes two or more entries having that unit as part of their meaning, LP selects the entry that shares the maximum number of units with the input list. Basically LP is a procedure specifying which lexicalizations have to be made and in which order; it is made up of three blocks of steps. The first block (steps 1-6) is concerned with the lexicalization of the sentence's predicate and subject. The second block (steps 7-8) lexicalizes any other additional argument which can be present. Finally, the third block (steps 9-10) concerns the lexicalization of adverbials, possibly included in the sentence. In the following each step is described.

- STEP 1: lexicalizing the sentence's declaration
All input lists contain one and only one unit with a C argument and the predicate MAIN; the C argument is called CMAIN. The first thing LP does is to identify the MAIN argument in the input list (IL) and its "declaration", i.e. the unit that declares the content of the MAIN argument. In the list of fig. 1 (which will be considered throughout this section) the MAIN argument is C4 and its declaration is the unit (C4 (P_SEE X1 X2)). The corresponding lexical entry (the root verb SEE-) is then selected from the vocabulary on the basis of the semantic units. In this entry a test on the subject is present (see fig. 2), which is immediately solved to select the right control units, making use of the unit (C10 (SUBJ X1)). LP also verifies whether a LESS request is present in the control units (not in our example); if so, another entry is selected on the basis of the LESS arguments. The LESS request can be recursive, that is, it can appear also in the new entry; the selection of entries continues until an entry without a LESS request is found. The suffix of the verb is not lexicalized at this point of the procedure because the feature units needed to select it have not yet been analysed. In fact, to choose, for example, "-s" instead of "-Ø" (zero morpheme), LP must know that the subject is in the third person singular. For this reason, the root SEE- is stored in a temporary memory called ENTRY_MEMORY (EM) together with the corresponding node C4, waiting for the right suffix to be selected. The arguments of the sentence's declaration are stored in a list called ARG_LIST (AL) and the C4, C5 and C10 units are deleted from the input list since they are not needed any more. At the end of step 1 the situation is as follows:

IL: ((C1 (P_CHILD X1))      AL: (X1 X2)
     (C2 (ONE X1))
     (C3 (DEF X1))
     (C6 (PAST C4))          EM: ((C4 SEE-))
     (C7 (P_APPLE X2))
     (C8 (MANY X2))
     (C9 (DEF X2)) )

- STEP 2: lexicalizing the subject
All the units referring to the subject of the sentence are selected from the input list; in our example they are (a) (C1 (P_CHILD X1)), (b) (C2 (ONE X1)), (c) (C3 (DEF X1)). One or more lexical entries are extracted from the vocabulary using the maximum overlap principle: firstly LP looks for an entry containing all three semantic units. If it does not find it (and this is what happens in our example), it looks for an entry presenting two of them. Since no such entry is present in the vocabulary, three different entries are selected, one for each unit. The entry corresponding to unit (a) is CHILD-, which is a root; it is then added to EM. For unit (b), "-Ø" is selected: it is also added to EM since it is a suffix. Unit (c) corresponds to the entry THE, which is a whole word; it is then stored elsewhere, in a special list called WORD_MEMORY (WM). The suffix "-Ø" is then associated to the root CHILD-: (C1 CHILD-) and (C2 -Ø) are deleted from EM and a new element (C1 CHILD--Ø) is added to WM. Moreover, the argument X1 is deleted from the ARG_LIST and units C1, C2, C3 are removed from the input list. Here is the situation at the end of step 2:

IL: ((C6 (PAST C4))          EM: ((C4 SEE-))
     (C7 (P_APPLE X2))        WM: ((C3 THE) (C1 CHILD--Ø))
     (C8 (MANY X2))           AL: (X2)
     (C9 (DEF X2)) )

- STEP 3: lexicalizing PROGR, if present
For some sentences a PROGR ("progressive") predicate is present in a unit of the input list. Examples of such sentences are: the child is laughing, the child is being nice, the child will be studying and so on. In all these cases, LP lexicalizes that unit at step 3, producing in EM the suffix "-ing". The suffix is associated in WM either to the root verb lexicalized at step 1 (e.g. the child is laughing) or to the root verb "be-" produced by a LESS request (e.g. the child is being nice).

- STEP 4: lexicalizing PERF, if present
This step is devoted to the lexicalization of the PERF predicate, if the input list contains it. Examples: the child has laughed, the child will have laughed, the child has been laughing, the child has been nice and so on. LP produces in EM the suffix "-ed", which is associated in WM either to the root verb lexicalized at step 1 (e.g. the child has laughed) or to the root verb "be-" produced by a LESS request (e.g. the child has been nice).

- STEP 5: lexicalizing FT, if present
All sentences necessarily include a unit with one of the following predicates: PRES ("present"), PAST ("past"), FUT ("future"). These three predicates are collectively called FT ("finite tense"). Lexicalization of the PRES and PAST predicates produces in EM respectively a suffix "-s" or "-Ø" and a suffix "-ed", which is associated in WM to a root according to the rule already used for steps 3 and 4. In our example a PAST predicate is present, so a suffix "-ed" is generated and added to EM.

EM: ((C4 SEE-) (C4 -ED))

It is then associated to the root verb SEE- and stored in WM. The PAST unit is then removed from the input list. A FUT predicate is lexicalized with the whole word WILL (or SHALL). This is what happens in sentences like "the child will laugh". The problem is that, in this case, the root verb LAUGH- lexicalized at step 1 is still, at this point, lacking a suffix. This drawback is avoided by noticing that the entry WILL (or SHALL) generates an extra feature unit with the predicate NFT ("non finite tense"), which is analysed by the next step of the procedure.

- STEP 6: lexicalizing NFT, if present
The NFT unit is a feature unit and it is therefore never originally present in the input list: it comes out as the result of the lexicalization of WILL (or SHALL). Lexicalizing an NFT unit means generating in EM a suffix "-Ø" which is added to the root verb lexicalized at step 1 if EM contains a root verb without a suffix (e.g. the child will laugh); otherwise it is associated to the root verbs "be-" or "have-" generated by a LESS request (e.g. the child will be laughing). At this point the first block of LP, which lexicalizes the verb and the subject of the sentence, is completed. The second block starts, which is concerned with the other arguments of the sentence's declaration, if there are any.

- STEP 7: lexicalizing additional arguments, if present
LP checks the content of ARG_LIST. If ARG_LIST is not empty, it means that additional arguments are present in the input list. In our example X2 is still in the ARG_LIST: as a consequence step 2 is repeated for X2; all units referring to X2 are now considered, generating the root APPLE-, the suffix -S and the whole word THE. Since the input list is now empty, the LP for the sentence of fig. 1 is completed.

- STEP 8: lexicalizing additional arguments of previous units, if present
It may happen that one unit containing one argument of the sentence's declaration includes a further argument. In this case step 8 of LP lexicalizes, in whatever order, all the units referring to this argument. Examples: the child saw Mary's father, the linguists referred to the end of the sentence, and so on. Step 8 is, of course, recursive for the following two reasons: (1) there may be more than one further argument (e.g. Mary's love for music); (2) new units being lexicalized may have other arguments in their turn (e.g. Mary's father's brother). Step 8 concludes the second block of LP. The third block is concerned with the lexicalization of adverbials.

- STEP 9: lexicalizing adverbials
An adverbial unit is a unit that has CMAIN as argument and has not already been lexicalized during the first block's steps. The list of fig. 5, giving the sentence "the child is sleeping deeply", contains the adverbial unit (C6 (DEEP C2)), which is lexicalized with the entry DEEPLY. Step 9 is recursive since an input list can contain more adverbial units, as in the sentence "the child is sleeping deeply today".

- STEP 10: lexicalizing additional arguments of adverbial units, if present
An adverbial unit may have additional arguments besides CMAIN. Consider, for example, the unit (C6 (IN C2 X2)) of the list shown in fig. 6. Step 10 is concerned with the lexicalization of the units in the list having these additional arguments. These units, in their turn, may contain other arguments (as in "the child sleeps with the mother's brother"); in such cases something similar to step 8 is executed at this point.

The result of LP is the WORD_MEMORY, in which every element can be either a whole word (e.g. (C3 THE)) or a pair root-suffix (e.g. (C4 SEE--ED)). A special step of LP transforms every pair root-suffix into a whole word according to a set of rules, typical of the language considered. In our example:

((C1 CHILD--Ø) (C3 THE) (C4 SEE--ED) (C7 APPLE--S) (C9 THE))

becomes

((C1 CHILD) (C3 THE) (C4 SAW) (C7 APPLES) (C9 THE))

2. The ordering procedure
The task of the ordering procedure is to order the words generated by the lexicalization procedure; for this purpose, the original meaning representation is also needed. OP is made up of two parts. The first part generates a fixed word order for each sentence type of the target language. This assumes that all languages have an intrinsic word order, independent of the fact that in some languages (e.g. English) sentences have a rather fixed word order, while in others (e.g. Italian) they have a more variable one. In this paper only this first part is considered, but OP has a second part which, starting from the fixed word order, gives as output the actual (in some languages variable) word order of the generated sentence. Whereas LP may be assumed to be "quasi-universal", OP is language-specific or, at least, language-type-specific. In the following the ordering procedure for a specific language, i.e. English, is described. Basically, the OP examines the units in the MR in a certain order and, for each unit examined, it transfers the word linked to it from WM to a final workspace AC ("actual sentence"). The first move of OP is to find the CMAIN declaration in the sentence; OP identifies the argument of this unit that has a NOM control unit. If the input list includes a unit with this argument and DEF or UNDEF as predicate, OP moves from WM to AC the word linked to that unit, which becomes the first word of the actual sentence. In our example:

WM: ((C1 CHILD) (C4 SAW) (C7 APPLES) (C9 THE))
AC: (THE)

Then the other words referring to the argument are moved into AC.

WM: ((C4 SAW) (C7 APPLES) (C9 THE))
AC: (THE CHILD)

Then OP looks in the input list for the FT predicate and moves into AC the corresponding word. The same operation is then performed for the predicates PERF and NFT. Then OP produces the CMAIN word if it has not yet been produced. This is the case, for example, of sentences like: the girl will laugh, the girl is nice, and so on. The next step is devoted to the other arguments of the sentence's declaration, starting with the argument having an ACC unit in its lexical entry (X2 in our example). The article is generated first and then the noun. Then the other declaration arguments are considered and, finally, the adverbial units.
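The "principle of maximum overlap" that drives the choice of lexical entries lends itself to a compact sketch. The Python below is a hedged illustration only, not the authors' Franz Lisp implementation: the unit notation is reduced to plain tuples, and the vocabulary and helper names (semantic_units, select_entry) are assumptions made for the example.

def semantic_units(entry):
    # Each toy entry is a pair: (entry name, set of semantic units it covers).
    return entry[1]

def select_entry(target_unit, input_units, vocabulary):
    # Keep only entries whose meaning contains the unit being lexicalized,
    # then pick the one sharing the most units with the input list.
    candidates = [e for e in vocabulary if target_unit in semantic_units(e)]
    if not candidates:
        return None
    return max(candidates, key=lambda e: len(semantic_units(e) & set(input_units)))

if __name__ == "__main__":
    # Toy units in the spirit of (C7 (P_APPLE X2)), (C8 (MANY X2)), (C9 (DEF X2)).
    input_units = [("P_APPLE", "X2"), ("MANY", "X2"), ("DEF", "X2")]
    # Hypothetical vocabulary in which one entry happens to cover two of the units,
    # so maximum overlap prefers it over the bare root.
    vocabulary = [
        ("APPLE-", {("P_APPLE", "X2")}),
        ("APPLES", {("P_APPLE", "X2"), ("MANY", "X2")}),
        ("THE",    {("DEF", "X2")}),
    ]
    print(select_entry(("P_APPLE", "X2"), input_units, vocabulary))
    # -> ('APPLES', ...): it overlaps with two input units, APPLE- with only one.

In the paper itself the plural is built as a root plus suffix rather than as a single entry; the sketch only illustrates how overlap counting selects among competing entries.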
null
In this way 1. is selected first; then 3. is selected and the whole word "gone" is generated; finally, 4. is analysed and, together with the root of the auxiliary "have-" produced during the process (see next section), gives the whole word "had". The same kind of problems is also present when dealing with irregular nouns like, for example, "child". The list of units of a lexical entry can be divided into semantic units (COGNI) and control units (FEAT). The semantic units are representative of the meaning of a word; the control units are not representative of the meaning, but they take part in the production of the whole word starting from the MR. Some entries lack the semantic units, others the control units. All the semantic and control units are collected respectively in the cogni-list and in the feat-list; each lexical entry refers to the elements of these lists through pointers. The reason for the distinction between semantic and control units is that the semantic units refer to the meaning of a lexical entry, which is independent of the language considered; the control units are, on the contrary, specific to a particular language and allow the production of the sentences of that language. A lexical entry can contain a "lexicalization" request and some tests. As an example, the following lexical entry is now considered:

(NICE ((MAIN CA) (COGNI (CA (P_NICE XA))) (FEAT (CB (XA MARK NOM)))) (LESS (MARK BE)) ) )

The LESS request activates the lexicalization procedure which produces, in this case, the auxiliary "be"; in fact this lexical entry is used to produce sentences like:

Bill is nice
Bill has been nice
...

In some cases, it is necessary to introduce a test. Consider, for example, the entry shown in Fig. 2, corresponding to the verb "to see". To choose the right units the tests have to be solved. If XA is the subject, then the following control units are selected:

(XA (MARK NOM)) (XB (MARK ACC))

If, on the contrary, the sentence is in the passive form (XB is subject), then the following control unit is chosen:

(XB (MARK NOM))

and the procedure of lexicalization is called in order to produce the whole word "by" and the suffix "-ed". An example of the general structure of a lexical entry with all its specifications is given in fig. 3; fig. 4 shows an example of use of the vocabulary. In the following a phrase structure grammar driving the construction of the vocabulary is presented.

3. THE ALGORITHM
The production procedure is composed of two subprocedures: the lexicalization procedure (LP) and the ordering procedure (OP). The LP, starting from a meaning representation in the form previously described, produces the unordered set of words composing the final sentence. The ordered sequence of words is then produced by the OP on the basis of the LP results and of the original MR. The task of the ordering procedure is then to assign the correct sequential order to the words in the final sentence.

In the previous sections a general algorithm for the production of one-clause sentences has been presented. The algorithm is actually implemented in Franz Lisp on a VAX 11/750 and it is able to produce English sentences. An extension of the algorithm to multi-clause sentences is in progress. A multi-clause sentence is composed of a list of units including two or more CMAINs, each with its own declaration. The LP for multi-clause sentences must first identify the main CMAIN and apply to it the LP for one-clause sentences.
During this application, the LP will encounter another CMAIN (a subordinate CMAIN). The procedure for the main CMAIN is then immediately interrupted and the LP for one-clause sentences starts all over again with reference to the subordinate CMAIN; when this task is completed, LP resumes the procedure for the main CMAIN at the point of interruption and brings it to completion. The same "interrupt-resume" mechanism can be used to extend the ordering procedure to multi-clause sentences.
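The "interrupt-resume" treatment of subordinate clauses is essentially a recursive call, and can be mimicked in a short Python sketch. Again this is only an illustration under assumed names (lexicalize_clause, a toy encoding in which a subordinate clause appears as a nested list): it is not the authors' implementation, which operates on real MAIN/CMAIN units and full lexical entries.

def lexicalize_clause(units, lexicon):
    # Walk the units of one clause; when a subordinate clause is met,
    # interrupt, lexicalize it completely, then resume where we stopped.
    words = []
    for unit in units:
        if isinstance(unit, list):              # a nested (subordinate) clause
            words.extend(lexicalize_clause(unit, lexicon))
        else:
            words.append(lexicon.get(unit, unit))
    return words

if __name__ == "__main__":
    lexicon = {"P_SAY": "said", "P_MARY": "Mary", "P_SLEEP": "slept",
               "P_CHILD": "the child", "THAT": "that"}   # toy lexicon (assumed)
    # "Mary said that the child slept", with the subordinate clause nested.
    main_clause = ["P_MARY", "P_SAY", "THAT", ["P_CHILD", "P_SLEEP"]]
    print(" ".join(lexicalize_clause(main_clause, lexicon)))

The recursion plays the role of the interruption: the outer clause is suspended while the nested list is processed, and processing resumes at exactly the point where it stopped.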
The procedure described in this paper produces written sentences in a particular language starting from formal representations of their meaning (production procedure: PP). The PP accepts as its input meaning representations like, for example, the one shown in fig. 1, from which it produces, as output, the English sentence:

(THE CHILD SAW THE APPLE)

Formally, a meaning representation (MR) is a list of units (semantic units) made up of a predicate (in the logical sense), one or more arguments and a label. Arguments are represented with Xs or Cs followed by a code number. Xs or Cs with the same code number refer to the same unit. Labels are represented with Cs followed by a code number, and they precede the "declaration" of the unit. This kind of MR is produced by a sentence comprehension procedure [8] which reads natural language sentences. In order to translate (lexicalise) an MR into words the procedure makes use of a vocabulary. The vocabulary is a list of "lexical entries"; each lexical entry is made up of a name and a meaning. The meaning is a list of units identical to the units of the MR except that literal codes replace number codes. Literal codes are introduced in the vocabulary to allow a general representation of the entries; these literal codes are then associated with number codes when a specific sentence is considered. The entries of the vocabulary do not correspond to "whole words" but to a set of abstract symbols: the whole words can then be obtained by suitably combining the corresponding symbols.

Consider, as an example, the past of the verb "to wash", "washed", which can be represented as:

( (CA (P_WASH XA XB)) (CB (MAIN CA)) (CC (PAST CA)) )

And the past of the verb "to go", "went", which can be represented as:

( (CA (P_GO XA)) (CB (MAIN CA)) (CC (PAST CA)) )

From the point of view of meanings, the two verbs have the same structure, but even if intuitively "washed" can be considered as a root "wash" plus a suffix "-ed", this is not true in the case of "went". In this case we can consider the whole word "went" as the merging between an abstract root and the abstract suffix of the past tense, exactly like "washed". Therefore the vocabulary is: The merging between 1. and 3. gives the whole word "went" and the merging between 2. and 3. gives the whole word "washed". If we assume, on the contrary, the presence of irregular words inside the vocabulary, then, starting from the MR:

((C1 (P_GO X1)) (C2 (MAIN C1)) (C3 (PAST C1)) (C4 (PERF C1)) (C5 (P_MARY X1)) )

which is representative of the sentence:

(MARY HAD GONE)

the system is not able to choose between: With our choice (totally morphological vocabulary) the vocabulary contains 1., 3. and:
null
Main paper: introduction: Production of sentences by computer has been approached for several years with the general goals of generating random sentences to test a grammatical theory or converting information from an internal representation into a natural language sentence. The first approach is more oriented toward theoretical linguistics than toward functional natural language processing systems. The objective of implementing a generation system of this sort is to test the descriptive adequacy of the test grammar [10, 2] . The second approach is to take some internal representation of the "meaning" of the sentence and to convert it into a surface-structure form, that is into an appropriate string of words (see, for example [4, 9, 6, 7, 3, 1] ).In this paper this second line has been followed and a procedure for the production of sentences is described producing written sentences in a particular language from formal representations of their meaning [5] . A characteristic of the meaning representations used is that they are "relatively" universal. In fact they bear no trace of the language into which they will be eventually mapped; it is the production procedure that imposes a language specific form on the final sentence produced. In principle, from a given meaning representation, sentences in a number of different languages can be produced, provided that the production procedure for each language is available. The meaning representations we are referring to in the paper do not involve any analysis of the meaning of the words. However, their format is compatible with lexical decomposition, and the procedure that we will describe accepts, without need for modification, lexically decomposed meaning representations.In the next section the internal representation used is briefly outlined and the structure of the vocabulary is described; in section 3 the algorithm is presented and, finally, in section 4 some results and future trends are discussed.The basic operations of LP are (a) identifying a particular unit in the input list (MR) (b) finding a lexical entry in the vocabulary that includes that unit as part of its meaning. The choice of the lexical entry is driven by a "principle of maximum overlap": if the vocabulary includes two or more entries having that unit as part of their meaning, LP selects the entry that shares the maximum number of units with the input list. Basically LP is a procedure specifying which lexicalizations have to be made and in which order; it is made up of three blocks of steps. The first block (steps 1-6) is concerned with the lexicalization of the sentence's predicate and subject. The second block (steps 7-8) lexicalizes any other additional argument which can be present. Finally, the third block (steps 9-10) concerns the lexicalization of adverbials, possibly included in the sentence. In the following each step is described. -STEP 1: lexicalizing the sentence's declaration All input lists contain one and only one unit with a C argument and the predicate MAIN; the C argument is called CMAIN. The first thing LP does is to identify the MAIN argument in the input-list (IL) and its "declaration", i.e. the unit that declares the content of the MAIN argument. In the list of fig. 1 (which will be considered throughout this section) the MAIN argument is C4 and it's declaration is the unit (C4 (P_SEE X1 X2)). The corresponding lexical entry (the root verb SEE-) is then selected from the vocabulary on the basis of the semantic units. 
In this entry a test on the subject is present (see fig. 2 ), which is immediately solved to select the right control units, making use of the unit (C10 (SUBJ X1)). LP verifies also if in the control units a LESS request is present (not in our example); if so, another entry is selected on the basis of the LESS arguments. The LESS request can be recursive, that is it can appear also in the new entry; the selection of entries continues until an entry without LESS request is found. The suffix of the verb is not lexicalized at this point of the procedure because the feature units needed to select it have not yet been analysed. In fact, to choose, for example, "-s" instead of "-<Z>" (zero morpheme), LP must know that the subject is in the third singular person. For this reason, the root SEE-is stored in a temporary memory called ENTRY_MEMORY (EM) together with the corresponding node C4, waiting for the right suffix to be selected. The arguments of the sentence's declaration are stored in a list called ARG_LIST (AL) and C4,C5,C10 units are deleted from the input list since they are not needed any more. At the end of step 1 the situation is as follows:IL: ((Cl (P_CHILD X1)) AL: (X1 X2) (C2 (ONE X1)) (C3 (DEF X1)) (C6 (PAST C4))EM: ((C4 SEE-)) (C7 (P_APPLE X2)) (C8 (MANY X2)) (C9 (DEF X2)) ) -STEP 2: lexicalizing the subject All the units referring to the subject of the sentence are selected from the input list; in our example they are (a) (C1 (P_CHILD X1)), (b) (C2 (ONE X1)), (c) (C3 (DEF X1)). One or more lexical entries are extracted from the vocabulary using the maximum overlap principle: firstly LP looks for an entry containing all three semantic units. If it does not find it (and this is what happens in our example), it looks for an entry presenting two of them. Since no such entry is present in the vocabulary three different entries are selected, one for each unit. The entry corresponding to unit (a) is CHILD-, which is a root; it is then added to EM. For unit (b), "-CD" is selected: it is also added to EM since it is a suffix. Unit (c) corresponds to the entry THE, which is a full voice; it is then stored elsewhere, in a special list called WORD_MEMORY (WM).The suffix "-Ø" is then associated to the root CHILD-: (C1 CHILD-) and (C2 -Ø) are deleted from EM and a new element (Cl CHILD--©) is added to WM. Moreover, the argument X1 is deleted from the ARG_LIST and units C1.C2.C3 are removed from the input list. Here is the situation at the end of step 2: IL: ((C6 (PAST C4)) EM: ((C4 SEE-)) (C7 (P_APPLE X2)) WM: ((C3 THE)(C1 CHILD--Ø)) (C8 (MANY X2)) AL: (X2) (C9 (DEF X2)) ) -STEP 3: lexicalizing PROGR, if present For some sentences a PROGR ("progressive") predicate is present in a unit of the input list. Examples of such sentences are: the child is laughing, the child is being nice, the child will be studying and so on. In all these cases, LP lexicalizes that unit at step 3, producing in EM the suffix "-ing". The suffix is associated in WM either to the root verb lexicalized at step 1 (e.g. the child is laughing) or to the root verb "be-" produced by a LESS request (e.g. the child is being nice). -STEP 4: lexicalizing PERF, if present This step is devoted to the lexicalization of the PERF predicate, if the input list contains it. Examples: the child has laughed, the child will have laughed, the child has been laughing, the child has been nice and so on. LP produces in EM the suffix "-ed" which is associated in WM either to the root verb lexicalized at step 1 (e.g. 
the child has laughed) or to the root verb "be-" produced by a LESS request (e.g. the child has been nice). -STEP 5: lexicalizing FT, if present All sentences include necessarily a unit with one of the following predicates: PRES ("present"), PAST ("past"), FUT ("future"). These three predicates are collectively called FT ("finite tense"). Lexicalization of PRES and PAST predicates produces in EM respectively a suffix "-s" or "-0" and "-ed" which is associated in WM to a root according to the rule already used for steps 3 and 4. In our example a PAST predicate is present, a suffix "-ed" is generated and added to EM. EM: ((C4 SEE-)(C4 -ED)) It is then associated to the root verb SEE-and stored in WM.The PAST unit is then removed from the input list.A FUT predicate is lexicalized with the whole word WILL (or SHALL). This is what happens in sentences like "the child will laugh". The problem is that, in this case, the root verb LAUGH-lexicalized at step 1, is still at this point lacking a suffix. This drawback is avoided noticing that the entry WILL (or SHALL) generates an extra feature unit with predicate NFT ("non finite tense") which is analysed by the next step of the procedure.-STEP 6: lexicalizing NFT, if present The NFT unit is a feature unit and it is therefore never originally present in the input list: it comes out as the result of the lexicalization of WILL (or SHALL). Lexicalizing a NFT unit means generating in EM a suffix "-Ø" which is added to the root verb lexicalized at step I if EM contains a root verb without suffix (e.g. the child will laugh); otherwise it is associated to root verbs "be-" or "have-" generated by a LESS request (e.g. the child will be laughing). At this point the first block of LP, which lexicalizes the verb and the subject of the sentence, is completed. The second block starts, which is concerned with the other arguments of the sentence's declaration, if there are any. -STEP 7: lexicalizing additional arguments, if present LP checks the content of ARG_LIST. If ARG_LIST is not empty, it means that additional arguments are present in the input list. In our example X2 is still in the ARG_LIST: as a consequence step 2 is repeated for X2; all units referring to X2 are now considered, generating the root APPLE-, the suffix -S and the whole word THE. Since the input list is now empty, the LP for the sentence of fig. 1 is completed.-STEP 8: lexicalizing additional arguments of previous units, if present It may happen that one unit containing one argument of the sentence's declaration includes a further argument. In this case step 8 of LP lexicalizes, in whatever order, all the units referring to this argument. Examples: the child saw Mary's father, the linguists referred to the end of the sentence, and so on.Step 8 is, of course, recursive for the following two reasons: (1) there may be more than on further argument (e.g. Mary's love for music) (2) new units being lexicalized may have other arguments in their turn (e.g. Mary's father's brother).Step 8 concludes the second block of LP. The third block is concerned with the lexicalization of adverbials.-STEP 9: lexicalizing adverbials An adverbial unit is a unit that has CMAIN as argument and has not already been lexicalized during first block's steps. The list of fig. 
5 , giving the sentence "the child is sleeping deeply", contains the adverbial unit (C6 (DEEP C2)), which is lexicalized with the entry DEEPLY.Step 9 is recursive since an input list can contain more adverbial units, as in the sentence "the child is sleeping deeply today". -STEP 10: lexicalizing additional arguments of adverbial units, if present An adverbial unit may have additional arguments besides CMAIN. Consider, for example, the unit (C6 (IN C2 X2)) of the list shown in fig. 6 .Step 10 is concerned with the lexicalization of the units in the list having these additional arguments. These units, in their turn, may contain other arguments (as in "the child sleeps with the mother's brother"); in such cases something similar to step 8 is executed at this point.The result of LP is the WORD MEMORY in which every element can be either a whole word (e.g. (C3 THE)), or a pair root-suffix (e.g. (C4 SEE--ED). A special step of LP transforms every pair root-suffix into a whole word according to a set of rules, typical of the language considered. In our example: ((C1 CHILD--Ø)(C3 THE)(C4 SEE--ED)(C7 APPLE--S)(C9 THE)) becomes ((C1 CHILD)(C3 THE)(C4 SAW)(C7 APPLES)(C9 THE))2. The ordering procedure The task of the ordering procedure is to order the words generated by the lexicalization procedure; at this purpose, also the original meaning representation is needed. OP is made up of two parts. The first part generates a fixed word order for each sentence type of the target language. This assumes that all languages have an intrinsic word order, independent of the fact that in some languages (e.g. English) sentences have a rather fixed word order, while in others (e.g. Italian) they have a more variable one. In this paper only this first part is considered, but OP has a second part which, starting from the fixed word order, gives as output the actual (in some languages variable) word order of the generated sentence. Whereas LP may be assumed to be "quasiuniversal", OP is language-specific or, at least, language-type-specific. In the following the ordering procedure for a specific language, i.e. English, is described. Basically, the OP examines the units in the MR in a certain order and, for each unit examined, it transfers the word linked to it from WM to a final workspace AC ("actual sentence"). The first move of OP is to find the CMAIN declaration in the sentence; OP identifies the argument of this unit that has a NOM control unit. If the input list includes a unit with this argument and DEF or UNDEF as predicate, OP moves from WM to AC the word linked to that unit, which becomes the first word of the actual sentence. In our example: WM: ((C1 CHILD)(C4 SAW)(C7 APPLES)(C9 THE)) AC: (THE) Then the other words referring to the argument are moved in AC.WM: ((C4 SAW)(C7 APPLES)(C9 THE)) AC: (THE CHILD)Then OP looks in input list for the FT predicate and moves in AC the corresponding word.The same operation is then performed for the predicates PERF and NFT. Then OP produces the CMAIN word if it has not yet been produced. This is the case, for example, of sentences like: the girl will laugh, the girl is nice ....... The next step is devoted to the other arguments of the sentence's declaration, starting with the argument having a ACC unit in its lexical entry (X2 in our example). The article is generated first and then the noun.Then the other declaration arguments are considered and, finally, the adverbial units. 
internal representation and vocabulary: The procedure described in this paper produces written sentences in a particular language starting from formal representations of their meaning (production procedure: PP). The PP accepts as its input meaning representations like, for example, the one shown in fig. 1 , which produces, as output, the english sentence:(THE CHILD SAW THE APPLE) Formally, a meaning representation (MR) is a list of units (semantic units) made up of a predicate (in the logical sense), one or more arguments and a label. Arguments are represented with Xs or Cs followed by a code number. Xs or Cs with the same code number refer to the same unit. Labels are represented with Cs followed by a code number, and they precede the "declaration" of the unit. This kind of MR representations is produced by a sentence comprehension procedure [8] which reads natural language sentences. In order to translate (lexicalise) a MR into words the procedure makes use of a vocabulary. The vocabulary is a list of "lexical entries"; each lexical entry is made up of a name and a meaning. The meaning is a list of units identical to the units of the MR except that literal codes replace number codes. Literal codes are introduced in the vocabulary to allow a general representation of the entries; these literal codes are then associated to number codes when a specific sentence is considered. The entries of the vocabulary do not correspond to "whole words" but to a set of abstract symbols: the whole words can then be obtained suitably combining the corresponding symbols.Consider, as an example, the past of the verb "to wash", "washed", which can be represented as:( (CA (P_WASH XA XB)) (CB (MAIN CA)) (CC (PAST CA)) )And the past of the verb "to go", "went", which can be represented as:( (CA (P_GO XA)) (CB (MAIN CA)) (CC (PAST CA)) )From the point of view of meanings, the two verbs have the same structure, but even if intuitively "washed" can be considered as a root "wash" plus a suffix "-ed", this is not true in the case of "went". In this case we can consider the whole word "went" as the merging between an abstract root and the abstract suffix of the past tense, exactly like "washed". Therefore the vocabulary is: The merging between 1. and 3. gives the whole word "went" and the merging between 2. and 3. gives the whole word "washed". If we assume, on the contrary, the presence of not regular words inside the vocabulary, then, starting from the MR:((C1 (P_GO X1)) (C2 (MAIN CD) (C3 (PAST CD) (C4 (PERF C1)) (C5 (P_MARY X1)) )which is representative of the sentence:(MARY HAD GONE) the system is not able to choose between: With our choice (totally morphological vocabulary) the vocabulary contains 1., 3. and: (ca (perf cb)): In this way 1. is firstly selected; then 3. is selected and the whole word "gone" is generated; finally, 4. is analysed: with the root of the auxiliary "have-" produced during the process (see next section), gives the whole word "had". The same kind of problems are present also when dealing with irregular nouns like, for example, "child". The list of units of a lexical entry can be divided into semantic units (COGNI) and control units (FEAT). The semantic units are representative of the meaning of a word; the control units are not representative of the meaning but they take part in the production of the whole word starting from the MR. Some entries lack the semantic units, some other the control units. 
All the semantic and control units are collected respectively in the cogni-list and in the featlist; each lexical entry refers to the elements of these lists through pointers. The reason of the distinction between semantic and control units is due to the fact that semantic units are referring to the meaning of a lexical entry which is independent of the language considered; the control units are, on the contrary, specific of a particular language and allow the production of the sentences of that language. A lexical entry can contain a "lexicalization" request and some tests. As an example, the following lexical entry is now considered:(NICE ((MAIN CA) (COGNI (CA (P_NICE XA))) (FEAT (CB (XA MARK NOM)))) (LESS (MARK BE)) ) )The LESS request activates the lexicalization procedure which produces, in this case, the auxiliary "be"; in fact this lexical entry is used to produce sentences like:Bill is nice Bill has been nice• • • • •In some cases, it is necessary to introduce a test. Consider, for example, the entry shown in Fig. 2 , corresponding to the verb "to see". To choose the right units the tests have to be solved. If XA is the subject, then the following control units are selected:(XA (MARK NOM)) (XB (MARK ACC))If, on the contrary, the sentence is in the passive form (XB is subject), then the following control unit is chosen:(XB (MARK NOM)) and the procedure of lexicalization is called in order to produce the whole word "by" and the suffix "-ed". An example of general structure of a lexical entry with all its specifications is given in fig. 3 ; fig. 4 shows an example of use of the vocabulary. In the following a phrase structure grammar driving the construction of the vocabulary is presented. 3. THE ALGORITHM The production procedure is composed of two subprocedures: the lexicalization procedure (LP) and the ordering procedure (OP). The LP, starting from a meaning representation in the form previously described, produces the unordered set of words composing the final sentence. The ordered sequence of words is then produced by the OP on the basis of the LP results and of the original MR. The task of the ordering procedure is then to assign the correct sequential order to the words in the final sentence.In the previous sections a general algorithm for production of one-clause sentences has been presented. The algorithm is actually implemented in Franz Lisp on a VAX 11/750 and it is able to produce english sentences. An extension of the algorithm to multi-clause sentences is in progress. A multi-clause sentence is composed of a list of units including two or more CMAINs, each with its own declaration. The LP for multi-clause sentences must first identify the main CMAIN and apply to it the LP for on-clause sentences. During this application, the LP will encounter another CMAIN (a subordinate CMAIN). The procedure for the main CMAIN is then immediately interrupted and the LP for one-clause sentences starts all over again with reference to the subordinate CMAIN; when this task is completed, LP resumes the procedure for the main CMAIN at the point of interruption and brings it to completion. The same "interruptresume" mechanism can be used to extend to multi-clause sentences the ordering procedure. Appendix:
null
null
null
null
{ "paperhash": [ "shapiro|generalized_augmented_transition_network_grammars_for_generation_from_semantic_networks", "klein|automatic_paraphrasing_in_essay_format", "friedman|directed_random_generation_of_sentences" ], "title": [ "Generalized Augmented Transition Network Grammars for Generation from Semantic Networks", "Automatic paraphrasing in essay format", "Directed random generation of sentences" ], "abstract": [ "The augmented transition network (ATN) is a formalism for writing parsing grammars that has been much used in Artificial Intelligence and Computational Linguistics. A few researchers have also used ATNs for writing grammars for generating sentences. Previously, however, either generation ATNs did not have the same semantics as parsing ATNs, or they required an auxiliary mechanism to determine the syntactic structure of the sentence to be generated. This paper reports a generalization of the ATN formalism that allows ATN grammars to be written to parse labelled directed graphs. Specifically, an ATN grammar can be written to parse a semantic network and generate a surface string as its analysis. An example is given of a combined parsing-generating grammar that parses surface sentences, builds and queries a semantic network knowledge representation, and generates surface sentences in response.", "Abstract : This report describes an operating computer program that accepts as input an essay of up to 300 words in length, and yields as output an essay type paraphrase that is a nonredundant summary of the content of the source text. Although no transformations are used, the content of several sentences in the input text may be combined into a sentence in the output. The format of the output essay may be varied by adjustment of program parameters, and the system occasionally inserts subject or object pronouns in its paraphrases to avoid repetitious style.", "The problem of producing sentences of a transformational grammar by using a random generator to create phrase structure trees for input to the lexical insertion and transformational phases is discussed. A purely random generator will produce base trees which will be blocked by the transformations, and which are frequently too long to be of practical interest. A solution is offered in the form of a computer program which allows the user to constrain and direct the generation by the simple but powerful device of restricted subtrees. The program is a directed random generator which accepts as input a subtree with restrictions and produces around it a tree which satisfies the restrictions and is ready for the next phase of the grammar. The underlying linguistic model is that of Noam Chomsky, as presented in Aspects of the Theory of Syntax. The program is written in FORTRAN IV for the IBM 360/67 and is part of a unified computer system for transformational grammar. It is currently being used with several partial grammars of English." ], "authors": [ { "name": [ "S. Shapiro" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Sheldon Klein" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "J. Friedman" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null ], "s2_corpus_id": [ "4247142", "16180776", "14446450" ], "intents": [ [ "background" ], [ "background" ], [ "background" ] ], "isInfluential": [ false, false, false ] }
Problem: The paper addresses the production of sentences by computer from formal representations of their meaning in a particular language. Solution: The paper proposes a procedure for generating written sentences in a specific language from formal representations of their meaning, focusing on converting internal representations into natural language sentences.
492
0
null
null
null
null
null
null
null
null
8a297f01b52dd592b531778461acec3a4994381e
237558749
null
The difficulty of developing logical algorithms for the machine translation of natural language
In studying machine translation software design, computer experts and linguists have traditionally concentrated on a number of phenomena deemed to present special problems and thus require particular attention. Among the favourites in this connection are morphological analysis, prepositional dependencies and the establishment of antecedents. These and similar subjects have been dealt with at great length in the numerous papers written over the years to demonstrate the necessity of adding one or more specific processing features to the software under design or pilot development. Experience in the practical upgrading of operational systems has however tended to reveal a surprising variety of quite different problems and has shown that the fears of designers and theorists are frequently unfounded. Indeed, in tailoring a system for use by translators, many quite unexpected types of error emerge which, in the absence of sufficiently comprehensive studies, have to be eliminated largely on the basis of trial and error. The paper presents several examples of translation problems of this type and explains how difficult it can be to formalize their resolution in computer programs. Special reference is made to the English-French version of Systran, under development at the European Commission in Luxembourg. Explanations are given of the identification of error types, the human effort involved in their study, and the testing procedures used to check the validity of the action taken to reduce their occurrence in routine translation work. Finally, a number of suggestions are made for those working on design aspects of new systems in the hope that by paying less attention to problems which have already been solved, efforts can be concentrated on the specific areas which continue to cause frustration for those required to correct or use machine translations in practice.
{ "name": [ "Pigott, Ian M." ], "affiliation": [ null ] }
null
null
Proceedings of the International Conference on Methodology and Techniques of Machine Translation: Processing from words to language
1984-02-01
0
2
null
Despite the proliferation of operational machine translation systems in the last two or three years, the majority of linguistics and research departments working on the design of new systems continue to pay little or no attention to what has been achieved, preferring to propose totally innovative solutions to the problems considered to be of most significance.
One of the favourite arguments used to justify the new approach is based on a recommendation made in the ALPAC report in 1966, namely that as high quality machine translation is not likely to be realized for several decades, efforts should be based on the development of less ambitious aids in the area.
However, a great deal of water has flowed under the bridge since 1966. A number of extremely useful MT systems have been developed up to surprisingly high quality levels, levels which either provide a degree of intelligibility fully adapted to the use of raw machine translation for information scanning (as in the Russian-English system at the U.S. Air Force) or which produce machine output which can be post-edited by translators at rates of up to five pages per hour (as at the Commission or in the translation of equipment maintenance manuals at firms like Xerox).
Systran is just one of half a dozen systems used on a day-to-day basis to give assistance to translators and end-users. In terms of quality, it is undoubtedly still in the lead but other systems are catching up quickly as increasingly high volumes of translation are channelled through them. I shall however use Systran as an illustration for my talk today, not only because we now have over eight years' experience of its development and use, but also because, much like the IBM mainframe computers, Systran, which started off as a very modest system, has grown stage by stage on the basis of user requirements into a package containing well over 100,000 lines of macro-assembler programming for each language pair.
Before going into detail, at this point it may be useful to list the areas where Systran has achieved a level of success which would be difficult to beat whatever new approach were to be used.
On the morphology side, Systran is 100 per cent successful in identifying all the inflexional endings of verbs, nouns and adjectives in the source language and in re-establishing their equivalents for the target. The approach here differs somewhat according to the source language in question: for the less inflected languages such as English, full forms are automatically created from the stems and listed in the dictionary, whereas for highly inflected languages like French, the endings are dynamically analysed by means of a table-driven algorithm.
The all-important problem of grammatical homograph resolution (i.e. deciding whether a word such as 'light' is in fact a noun, a verb or an adjective in a given context) is also handled surprisingly successfully by Systran. The large majority of the most frequently occurring homograph types are invariably correctly resolved (noun vs verb, adverb vs preposition, finite verb vs infinitive, etc.) and even in cases where errors do still occur (past tense vs past participle or adjective), the hit rate is well over 90%.
Unfortunately, those working on the development of new systems do not appear to appreciate the importance of this aspect of MT analysis. Many seem to assume it is of relatively minor importance and can somehow be simply resolved by the establishment of semantic dependencies.
This is certainly not the case and I would advise all those of you working on new systems to give this problem special attention from the very start.
Dictionary structure is another feature of Systran which functions remarkably well. The Systran dictionaries provide for about fifteen different levels of coding ranging from basic one-word entries to highly complex multi-word contextual rules which can be powerful enough to override analysis algorithms if and when this proves to be necessary. Admittedly, months if not years of experience are required for a coder to make optimal use of these dictionary features. Nevertheless, the wide variety of dictionary support available is no accident. Each and every feature has its own special function and has been specifically developed to meet a given requirement.
As a good dictionary is a prerequisite to high-quality MT, system developers would be recommended to design lexical data bases which not only provide a firm basis for the various levels of coding required but which can be updated at reasonable cost. It will be remembered that the TAUM system was discontinued mainly because of the excessive cost of increasing the size of the dictionary. TAUM entries used to cost up to $40 each while the cost per entry in other production systems, including Systran, is below $5.
Target language generation in Systran also presents very few problems at this stage of development. In other words, provided the results of source language analysis are correct, the target synthesis and rearrangement processing rarely produces any surprises. Target generation has two main functions: the first is to inflect correctly (person, number, gender, case, etc.) all nouns, verbs and adjectives in the sentence, while the second is to place all the words into the correct order for the target language in question.
Although this, like other developments, took time, people and money, the amount of effort was not nearly as great as might have been feared. The establishment of synthesis rules proved to be a relatively straightforward task, requiring no more than about three man-years for each new target language.
I have, in recent years, seen several learned accounts of the difficulties of handling target generation. I can only say that in practice this level of processing, unlike analysis of source language, has turned out to be fairly mechanical, representing not more than about ten per cent of the effort required on any language combination. Those who continue to propose new approaches based on the 'special requirements' of the target language in question would be well advised to investigate the dependability of what has already been achieved.
Finally, turning to source language analysis, which is certainly by far the most complicated part of MT, I would only say at this stage that Systran has indeed achieved a relatively high level of success. Most of the source-language sentences are satisfactorily parsed, even though some annoying errors still occur at times. I shall attempt to explain why they occur and how they could be eliminated later in my talk.
My colleague, Peter Wheeler, also speaking at this conference has commented on the importance of meeting user requirements by a pragmatic approach to the problems in hand.
I in turn should like to consider some of the linguistic problems which have caused real difficulty and explain how they were solved, before going on to argue why many of the fears of more theoretically oriented linguists working on MT developments are largely unfounded.
Let us therefore go back to February 1976 when the Commission first started to develop the English-French Systran system. With a dictionary of only a few thousand words and only a fairly small program, there was ample opportunity for eliminating errors. Indeed, in those early days, practically every sentence contained a wide variety of mistakes at all levels, enough to persuade most of those assigned to the project to give up in despair.
The errors, as might be expected, occurred at two main levels: dictionary and program.
It was quickly realized that the performance of the program could not be properly judged until an adequate amount of basic vocabulary was available. The first priority was thus to create a well-coded dictionary for the text corpus on which we were working. This happened to be the Food Science and Technology Abstracts data base which had been chosen in view of the subject area involved (agriculture) and owing to the fact that it already contained thousands of pages of text in machine-readable form, thus eliminating the need to input source text manually.
In this initial dictionary work, we made wide use of three types of tool: a key-word-in-context listing of all the words in a 20,000-sentence sample of text, word-frequency counts for the same sample and raw machine translations of about 1000 sentences at a time, complete with not-found-word lists.
The raw MT served as a basis for deciding which words and expressions required coding while the KWIC and frequency listings provided an indication of the various contexts in which a given word was likely to appear, together with its more general frequency of occurrence. The normal practice was to go through the raw MT sentence-by-sentence, code up missing vocabulary, check against the not-found-word list in order to avoid duplication and choose the most generally acceptable basic meaning, making full use of the KWIC information.
It may be thought that by working on a supposedly limited corpus of this type, problems of meaning could be avoided. In fact, this was seldom the case. We soon learnt that the data base covered all aspects of food science from farming to processing, from chemical testing to legislation and standards, and from biology to environmental pollution. As a result, it proved to be an excellent testbed for the work in hand.
Very quickly we abandoned the idea of the topical glossary approach which is available in Systran and has been used to good effect by those who deal principally with one major sector of interest. This facility quite simply allows a basic meaning to be selected on the basis of a subject-field parameter in preference to other meanings a word might have in other contexts.
There were two reasons why we decided not to use topical glossaries. On the one hand we soon discovered that constantly occurring terms such as 'plant' could, even in the field of food science, have quite different meanings - either a growing vegetable organism (F - plante) or an industrial facility (F - installation). Surprisingly enough, it turned out that the second meaning was the more frequent even in this subject sector.
The other reason was that although we based our initial development on the food sector, we realized from the very beginning that if MT was to be of real use at the Commission, it would have to be able to cope with practically any subject sector under the sun.
A good illustration of a general purpose basic meaning is the translation assigned to the word 'station'. 'Gare' would have been too specific, 'station', although understandable, would rarely be correct, and so we finally opted for 'poste' which is correct in most contexts and understandable in many more. 'Poste de chemin de fer' is not too bad a translation of 'railway station' but 'gare de télécommunications' would be quite unintelligible.
This last example brings us on to the next level of dictionary coding, the string expression, which can be assigned both a basic meaning and additional syntactic information. 'Railway station' can be given its own meaning 'gare' and coded in such a way that 'station' will in this context be systematically recognized as a noun rather than as a verb. I might add that as often as not, the use of common nouns such as 'station' as verbs is overlooked by the dictionary coder until the day when we get a sentence such as 'France has decided to station troops in Beirut'. Many interesting pages could be written on expressions coding, but I would just like to mention here the usefulness of coding various types of string expression as a means of overcoming syntactic ambiguity. The phrase 'in order to' could theoretically have two quite different meanings, depending on whether 'order' is interpreted as a noun in its own right (as in 'He returned it in order to the owner') or simply as part of an infinitive particle expression of purpose. In practice, of course, the infinitive particle occurrence is by far the most common and can be entered as the only possibility to be considered. I must admit, however, that although I once predicted the other meaning would never turn up, I have since been proved wrong, but in my opinion, 5000 hits to one miss is far better than 4800 hits and 200 misses even if it happened that one of the 4800 gave the correct resolution of the less common meaning. Indeed, I would say that it is just this kind of pragmatic approach which has been responsible for the success of our development.
Once we had built up a reasonable dictionary, we were in a position to take a more objective look at the translation program. We could run new raw batches of MT and study the results of the sentences which, despite the fact that all the words were in the dictionary, still continued to produce errors.
At this stage the errors fell into four basic types: those which resulted from insufficient information in the dictionary and which could normally be eliminated without too much difficulty, those resulting from poor homograph analysis which required much more work, those requiring extension to the various stages of the parsing routines which accounted for a substantial amount of effort over the first four years, and those which necessitated the introduction of contextual dictionary rules often in conjunction with semantic markers.
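The string-expression level of coding described above can be pictured as a longest-match lookup over the source tokens. The following Python fragment is only a hedged illustration: the table contents, the token handling and the longest-match policy are assumptions made for this example, not Systran's actual dictionary mechanism.

# Illustrative longest-match lookup for string expressions such as
# "railway station" -> "gare" and "in order to" handled as an infinitive
# particle of purpose.  Table contents and matching policy are assumptions.
EXPRESSIONS = {
    ("railway", "station"): {"target": "gare", "category": "NOUN"},
    ("in", "order", "to"):  {"target": "pour", "category": "INF_PARTICLE"},
}

def mark_expressions(tokens):
    """Scan left to right, preferring the longest expression starting at each token."""
    out, i = [], 0
    max_len = max(len(key) for key in EXPRESSIONS)
    while i < len(tokens):
        for n in range(max_len, 1, -1):                  # longest match first
            key = tuple(t.lower() for t in tokens[i:i + n])
            if len(key) == n and key in EXPRESSIONS:
                out.append((" ".join(key), EXPRESSIONS[key]))
                i += n
                break
        else:                                            # no expression found here
            out.append((tokens[i], None))
            i += 1
    return out

print(mark_expressions("The railway station was closed".split()))
print(mark_expressions("He returned early in order to help".split()))

Entering the frequent reading as the only possibility, as argued above, simply amounts to keeping a single entry per expression in such a table.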
I shall try to give you a typical example of each in order to illustrate what is bound to happen during the development of any system and how efforts can be made to solve the problems involved.
As an illustration of lack of dictionary information, let us take the sentence:
- Many institutions but particularly the Commission had been considered by the study.
The raw MT might have come out:
- Plusieurs institutions mais la Commission avait été particulièrement considérée par l'étude.
The translation is of course quite wrong but at first sight, it is difficult to see why it has gone wrong. On checking the dictionary information, we find that everything appears to be correct and even the program seems to have functioned as it should. Only when we get a dump of the actual analysis do we find that 'particularly' has been marked as an adverb governing 'considered'. Had other adverbs been used (e.g. unfortunately the Commission had been considered by the study), the analysis would have been correct.
The reason things went wrong was quite simply that while the system provided for codes to mark the affinity between an adverb and a verb or an adjective, no such code existed for an adverb governing a noun. Yet in our sentence 'particularly' is indeed governing a noun. Once we had diagnosed the trouble, we were able to add a new code, slightly modify the analysis program and obtain the correct translation:
- Plusieurs institutions mais en particulier la Commission avaient été considérées par l'étude.
Not purple prose perhaps, but at least syntactically correct and quite intelligible.
The type of homograph error that might have occurred in the early stages can be illustrated by the translation of:
- The laboratory analyses improved awareness of the problem.
as:
- Le laboratoire analyse la conscience améliorée du problème.
What has happened, of course, is that 'analyses' has been resolved as a verb with the result that 'improved' becomes a past participle adjective instead of a simple past tense. This kind of error is far more difficult to eliminate and while special cases such as this could be dealt with quite simply by entering 'laboratory analysis' in the dictionary as a noun phrase, a more general approach to the problem would require systematic study of hundreds, if not thousands of sentences of the same type.
Such studies have indeed been conducted over the years and I am pleased to report that they have been largely successful. By and large, we attempt to design the program to make full use of the syntactic and semantic information available on each word in order to arrive at the most likely solution on the basis of a sizable error corpus. Once the program has been modified, similar but new material is run for checking, negative side effects are noted and further modifications are made. Slowly but surely performance increases. However, for some of the more common homograph types such as noun vs verb, a routine can easily run to thirty or forty pages of contextual programming.
As an example of a typical parsing error, I would give the following example:
- The committee discussed faulty equipment and office management.
Early in development, this might have been translated:
- Le comité a étudié l'équipement et l'administration de bureau défectueux.
as the adjective 'faulty' would be analysed as qualifying both 'equipment' and 'management' rather than just 'equipment'.
When we read a sentence such as this, there is absolutely no doubt in our minds that the 'faulty' refers only to 'equipment' and not to 'management', the reason being that we have all had plenty of experience of faulty equipment and we know only too well that management committees are unlikely to criticise management, however bad it may be.
Yet the computer has no such inborn intelligence. The problem can however be solved on the basis of the most likely syntactic and semantic relationships between nouns and adjectives of given types. For example, in
- faulty typewriters and photocopiers
'faulty' would govern both nouns as they both fall into the same semantic category (both are devices), whereas in our first example, it would qualify only 'equipment', which would not carry the same semantic markers as 'management'.
Finally, to turn to the contextual dictionary entries, I will take a seemingly very simple example, the preposition 'in'. I may say that in practice the correct translation of the preposition 'in' has turned out to be one of our most difficult meaning problems. Indeed, there are some 550 contextual entries attached to this unassuming little word, not to speak of a series of special routines which deal with its translation in date structures and in connection with place names.
The basic dictionary default for the translation of 'in' is 'dans' but there are a great many cases where that meaning is incorrect.
Three simple examples will illustrate the point:
- 'In' governing the name of an organization (on the basis of semantic coding) will be translated 'à' (à la Commission, aux Nations Unies).
- 'In' immediately governing a material will be translated 'en' with no article (en acier, en bois).
- 'In' immediately governing an animate noun will be translated 'chez' with a definite article (chez les hommes, chez les rats).
These examples might appear childish in their simplicity, but as I said before, there are over 500 such codings of greater or lesser complexity and it has taken many man-years of effort to cover them all adequately. No printed dictionary will give anything like a proper explanation of the variety of translations required. Only on the basis of working through thousands of pages of real text does the true extent of the problem become apparent, and only then can it be dealt with successfully.
Now that we have looked at some typical practical problems in MT development, let us turn for a moment to some of the concerns about machine translation often expressed by academic linguists working at research centres or at the universities.
Reams and reams have been written about prepositional dependency and its importance in machine translation. How important is the problem, how easy is it to resolve and to what extent can existing systems cope with it?
First I would say that the establishment of prepositional dependencies is nowhere near as important as finding the correct translation of the prepositions once their relationships have been established. We have already seen the magnitude of the problem of dealing with the translation of 'in' but the same can be said of most common prepositions (at, to, with, by, for, etc.).
By contrast, the actual setting of relationships is a comparatively simple process, particularly as, in the majority of cases in written texts, prepositions govern the noun phrase to the right and are rarely governed by the noun, verb or adjective to the left.
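The contextual entries attached to 'in' can be thought of as an ordered list of rules keyed on semantic markers of the governed noun, falling through to the default 'dans'. The sketch below, in Python, is purely illustrative: the marker names, the rule format and the article handling are assumptions made for this example and do not reflect the actual Systran coding.

# Toy rule list for the contextual translation of "in", driven by semantic
# markers on the governed noun and falling through to the default "dans".
IN_RULES = [
    # (test on the noun's semantic markers, French preposition, article handling)
    (lambda markers: "ORGANIZATION" in markers, "à",    "contracted article"),  # à la Commission, aux Nations Unies
    (lambda markers: "MATERIAL" in markers,     "en",   "no article"),          # en acier, en bois
    (lambda markers: "ANIMATE" in markers,      "chez", "definite article"),    # chez les hommes, chez les rats
]

def translate_in(markers):
    """Return the French preposition and article treatment for 'in' + a noun with these markers."""
    for test, preposition, article in IN_RULES:
        if test(markers):
            return preposition, article
    return "dans", "default article"

print(translate_in({"ORGANIZATION"}))  # ('à', 'contracted article')
print(translate_in({"MATERIAL"}))      # ('en', 'no article')
print(translate_in({"ANIMATE"}))       # ('chez', 'definite article')
print(translate_in(set()))             # ('dans', 'default article')

The real coding is of course far richer (over 500 entries for 'in' alone), but the first-match, fall-through organisation captures the general shape of such contextual rules.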
Where special government does need to be handled, it can almost always be efficiently handled by appropriate contextual coding. It is also of interest to note here that translators seem far less concerned than might be expected by the occasional incorrect setting of prepositional dependencies resulting in the wrong translation.
On the subject of antecedents, which again has been given considerable attention in academic circles, in practice we have found that a pronoun subject normally has as its antecedent the noun which was the subject of the last main clause. Often, of course, the last main clause with a noun, rather than a pronoun, as subject is to be found in a previous sentence. In Systran, however, information about previous sentences is stored and the problem can be easily solved.
There are of course exceptions to this general rule, some of which can be successfully handled by the program, some of which can not. And although there have been one or two complaints from translators, by and large the system appears to perform well. At the human level, this type of error is, in any case, among the easiest to correct.
I have mentioned these two examples in the interest of restoring some kind of reasonable perspective regarding the problems to be solved in developing new MT systems. Indeed, on the basis of our experience to date, the real problems reside at quite different levels.
Rather surprisingly perhaps, the ongoing quality enhancement of raw MT output at this stage seems to be more dictionary bound than program dependent. We do of course continue to enhance the programs, particularly in regard to the establishment of enumerations and the resolution of grammatical homographs and clause boundary setting, but most of our effort goes into the addition of contextual coding entries aimed at providing the translator with as much reliable technical terminology as possible. This is a long, rather tedious process, particularly in an institution like ours where there seems to be no end to the number and complexity of subject matter covered in day-to-day work. However, as post-editing work consists essentially of finding the 'mot juste' for each and every context, the better the terminology provided by the machine, the easier the job for the translator.
I would therefore suggest that in the development of new systems, designers pay far more attention than they have in the past to creating the best possible dictionary structures and terminology updating features, so as to minimize the human effort involved in dictionary improvement and expansion. Such a system could be based on the interfacing of source language analysis results with word processing systems used for post-editing. If this could be achieved, a dictionary coder would be able to avoid a great deal of analytical coding work as he would be able to make use of menu proposals as a basis for dictionary creation.
For example, the post-editing changes made by a translator could be coupled to the source text to provide potential equivalences for inclusion in the dictionary. At the simplest level, these would be at the one-word level and would simply propose, as a basic meaning, the equivalent inserted by the post-editor. For example, potential information on a not-found word such as 'post-editor' could be created on the basis of its ending (e.g. noun, plural in -s, human, profession, etc.)
and the meaning inserted by the translator (post-éditeur).
At the next level, expressions requiring post-editing corrections could be listed for approval as string technical terms. 'Word processing' could lead to information such as: noun expression, main meaning - traitement de textes.
Finally, in order to facilitate the introduction of the correct meaning of a term in context, the updating algorithm could make use of the parsing information available from analysis in a given text and propose selection criteria for the correct contextual equivalent. As an example, let us take the sentence:
- The equipment appeared to work successfully under normal operating conditions.
In the absence of contextual meanings, the verb 'work' might well have been assigned its basic meaning 'travailler' rather than the post-editor's choice 'fonctionner'. The updating proposals, based on the sentence structure, might look something like this:
- WORK, meaning 'fonctionner' when:
* a. semantic subject = DEVice
* b. noun subject = equipment
* c. modified by adverb = successfully
* d. used intransitively
* e. dependent on modal = appear
The translator would then simply select one or more of the proposals as a basis for contextual meaning selection. Here he might choose a) and c), or, if he wished to be more specific, he might only select b). Following his selection, which would only take seconds, the necessary syntactic and semantic information would be correctly formatted into a dictionary rule. After updating, which ideally should be immediate for each entry, any conflicts with existing information would be displayed, allowing the coder to make any additional changes with or without the assistance of his colleagues.
This kind of computerized aid to dictionary making would certainly be a tremendous help to the coder and would make for cheap, rapid dictionary building. If this could be ensured, the overall cost of MT improvement would be drastically cut. And once the algorithm had been successfully developed for dictionaries alone, it could possibly be extended to interface with the program itself as an aid to program enhancement.
And this brings me to my last and most important point.
In our experience, the most time-consuming part of development work has been related to the difficulty experienced by linguists in reducing natural language to logical processing at the programming and dictionary levels. At all levels of the system, translators, linguist-programmers and systems experts have constantly had difficulty in predicting the overall consequences of an addition or modification to the program or dictionaries.
More often than not, it has been necessary to make use of the only remaining tool available, that of trial and error. In language processing there are indeed very few, if any, hard and fast rules. This has meant that each basic rule has led to a general series of exceptions, each of these has in turn led to more specific exceptions and so on, down the line. Even the seemingly most straightforward rules such as 'a pronoun immediately preceding a finite verb is the subject of that verb' can lead to dozens of general exceptions and hundreds of specific ones.
It has taken us over eight years to create software packages of well over 100,000 lines of assembler programming and some 150,000 dictionary entries for each language pair. The capacity of the computer has, by the way, not proved to be a problem, nor has running time or even running cost.
The basic problem has been one of training translators and linguists to reduce their knowhow to logical patterns of thought which can be programmed into the computer, tested, amended, added to, further tested and so on until reliable results can be obtained.
The work continues year by year and as the quality of the raw MT improves, so the number of enthusiastic users expands.
However, with more and more users of MT - and remember that over 400,000 pages of MT were run in 1983 on the various production systems now in use - the problem of successfully integrating user feedback becomes ever more complex.
My conclusion today is, then, a very simple one.
I would urge all those concerned with the basic design or further improvement of MT systems to concentrate their efforts towards assisting dictionary makers and programmers in their routine work.
Rather than coming up with new revolutionary theories based on artificial intelligence or Chomskian linguistics, they should attempt to find ways and means of prompting the human beings concerned with development into making the best possible use of their knowledge and experience by providing them with efficient updating features enabling them to make full use of the feedback received from users.
Finally, let us never forget that MT systems are intended to help overcome language barriers by speeding up the rate at which translators can handle their work. Translators' requirements are thus of paramount importance and should be borne in mind at every stage of design and development.
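As a closing illustration of the menu-driven dictionary-updating aid proposed above (the 'WORK, meaning fonctionner' example), the following Python sketch shows how candidate selection criteria might be generated from a parse record and the post-editor's chosen equivalent. The field names and the proposal format are assumptions made for the illustration, not an existing Systran facility.

# Hedged sketch of the proposed dictionary-updating aid: given the parse
# record of a source-language verb and the equivalent chosen by the
# post-editor, generate the candidate selection criteria the coder could
# tick to build a contextual dictionary rule.
def propose_rule(parse, chosen_meaning):
    """Return labelled criteria for a contextual rule selecting `chosen_meaning`."""
    criteria = []
    if parse.get("subject_semantic_class"):
        criteria.append(("a", "semantic subject = " + parse["subject_semantic_class"]))
    if parse.get("subject_noun"):
        criteria.append(("b", "noun subject = " + parse["subject_noun"]))
    if parse.get("adverb"):
        criteria.append(("c", "modified by adverb = " + parse["adverb"]))
    if parse.get("intransitive"):
        criteria.append(("d", "used intransitively"))
    if parse.get("modal"):
        criteria.append(("e", "dependent on modal = " + parse["modal"]))
    return {"word": parse["word"], "meaning": chosen_meaning, "criteria": criteria}

# "The equipment appeared to work successfully under normal operating conditions."
parse = {
    "word": "work",
    "subject_semantic_class": "DEVice",
    "subject_noun": "equipment",
    "adverb": "successfully",
    "intransitive": True,
    "modal": "appear",
}
rule = propose_rule(parse, "fonctionner")
for letter, criterion in rule["criteria"]:
    print(letter + ". " + criterion)
# The coder might tick (a) and (c), or only (b), as the conditions of the rule.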
null
null
null
null
Main paper: introduction: Despite the proliferation of operational machine translation systems in the last two or three years, the majority of linguistics and research departments working on the design of new systems continue to pay little or no attention to what has been achieved, preferring to propose totally innovative solutions to the problems considered to be of most significance.One of the favourite arguments used to justify the new approach is based on a recommendation made in the ALPAC report in 1966, namely that as high quality machine translation is not likely to be realized for several decades, efforts should be based on the development of less ambitious aids in the area.However, a great deal of water has flowed under the bridge since 1966. A number of extremely useful MT systems have been developed up to surprisingly high quality levels, levels which either provide a degree of intelligibility fully adapted to the use of raw machine translation for information scanning (as in the Russian-English system at the U.S. Air Force) or which produce machine output which can be post-edited by translators at rates of up to five pages per hour (as at the Commission or in the translation of equipment maintenance manuals at firms like Xerox).Systran is just one of half a dozen systems used on a day-to-day basis to give assistance to translators and end-users. In terms of quality, it is undoubtedly still in the lead but other systems are catching up quickly as increasingly high volumes of translation are channelled through them. I shall however use Systran as an illustration for my talk today, not only because we now have over eight years' experience of its development and use, but also because, much like the IBM mainframe computers, Systran which started off as a very modest system, has grown stage by stage on the basis of user requirements into a package containing well over 100,000 lines of macro-assembler programming for each language pair.Before going into detail, at this point it may be useful to list the areas where Systran has achieved a level of success which would be difficult to beat whatever new approach were to be used.On the morphology side, Systran is 100 per cent successful in identifying all the inflexional endings of verbs, nouns and adjectives in the source language and in re-establishing their equivalents for the target. The approach here differs somewhat according to the source language in question: for the less inflected languages such as English, full forms are automatically created from the stems and listed in the dictionary whereas for highly inflected languages like French, the endings are dynamically analysed by means of a table-driven algorithm.The all important problem of grammatical homograph resolution (i.e. deciding whether a word such as 'light' is in fact a noun, a verb or an adjective in a given context) is also handled surprisingly successfully by Systran. The large majority of the most frequently occurring homograph types are invariably correctly resolved (noun vs verb, adverb vs preposition, finite verb vs infinitive, etc.) and even in cases where errors do still occur (past tense vs past participle or adjective), the hit rate is well over 90%.Unfortunately, those working on the development of new systems do not appear to appreciate the importance of this aspect of MT analysis. Many seem to assume it is of relatively minor importance and can somehow be simply resolved by the establishment of semantic dependencies. 
This is certainly not the case and I would advise all those of you working on new systems to give this problem special attention from the very start.Dictionary structure is another feature of Systran which functions remarkably well. The Systran dictionaries provide for about fifteen different levels of coding ranging from basic one-word entries to highly complex multi-word contextual rules which can be powerful enough to override analysis algorithms if and when this proves to be necessary. Admittedly, months if not years of experience are required for a coder to make optimal use of these dictionary features. Nevertheless, the wide variety of dictionary support available is no accident. Each and every feature has its own special function and has been specifically developed to meet a given requirement.As a good dictionary is a prerequisite to high-quality MT, system developers would be recommended to design lexical data bases which not only provide a firm basis for the various levels of coding required but which can be updated at reasonable cost. It will be remembered that the TAUM system was discontinued mainly because of the excessive cost of increasing the size of the dictionary. TAUM entries used to cost up to $40 each while the cost per entry in other production systems, including Systran, is below $5.Target language generation in Systran also presents very few problems at this stage of development. In other words, provided the results of source language analysis are correct, the target synthesis and rearrangement processing rarely produces any surprises. Target generation has two main functions: the first is to inflect correctly (person, number, gender, case, etc.) all nouns, verbs and adjectives in the sentence, while the second is to place all the words into the correct order for the target language in question.Although this, like other developments, took time, people and money, the amount of effort was not nearly as great as might have been feared. The establishment of synthesis rules proved to be a relatively straightforward task, requiring no more than about three man-years for each new target language.I have, in recent years, seen several learned accounts of the difficulties of handling target generation. I can only say that in practice this level of processing, unlike analysis of source language, has turned out to be fairly mechanical, representing not more than about ten per cent of the effort required on any language combination. Those who continue to propose new approaches based on the 'special requirements' of the target language in question would be well advised to investigate the dependability of what has already been achieved.Finally, turning to source language analysis, which is certainly by far the most complicated part of MT, I would only say at this stage that Systran has indeed achieved a relatively high level of success. Most of the source-language sentences are satisfactorily parsed, even though some annoying errors still occur at times. I shall attempt to explain why they occur and how they could be eliminated later in my talk.My colleague, Peter Wheeler, also speaking at this conference has commented on the importance of meeting user requirements by a pragmatic approach to the problems in hand. 
I in turn should like to consider some of the linguistic problems which have caused real difficulty and explain how they were solved, before going on to argue why many of the fears of more theoretically oriented linguists working on MT developments are largely unfounded.Let us therefore go back to February 1976 when the Commission first started to develop the English-French Systran system. With a dictionary of only a few thousand words and only a fairly small program, there was ample opportunity for eliminating errors. Indeed, in those early days, practically every sentence contained a wide variety of mistakes at all levels, enough to persuade most of those assigned to the project to give up in despair.The errors, as might be expected, occurred at two main levelsdictionary and program.It was quickly realized that the performance of the program could not be properly judged until an adequate amount of basic vocabulary was available. The first priority was thus to create a well-coded dictionary for the text corpus on which we were working. This happened to be the Food Science and Technology Abstracts data base which had been chosen in view of the subject area involved (agriculture) and owing to the fact that it already contained thousands of pages of text in machine-readable form, thus eliminating the need to input source text manually.In this initial dictionary work, we made wide use of three types of tool, a key-word-in-context listing of all the words in a 20,000-sentence sample of text, word-frequency counts for the same sample and raw machine translations of about 1000 sentences a time complete with not-found-word lists.The raw MT served as a basis for deciding which words and expressions required coding while the KWIC and frequency listings provided an indication of the various contexts in which a given word was likely to appear, together with its more general frequency of occurrence. The normal practice was to go through the raw MT sentence-by-sentence, code up missing vocabulary, check against the not-found-word-list in order to avoid duplication and choose the most generally acceptable basic meaning making full use of the KWIC information.It may be thought that by working on a supposedly limited corpus of this type, problems of meaning could be avoided. In fact, this was seldom the case. We soon learnt that the data base covered all aspects of food science from farming to processing, from chemical testing to legislature and standards, and from biology to environmental pollution. As a result, it proved to be an excellent testbed for the work in hand.Very quickly we abandoned the idea of the topical glossary approach which is available in Systran and has been used to good effect by those who deal principally with one major sector of interest. This facility quite simply allows a basic meaning to be selected on the basis of a subject-field parameter in preference to other meanings a word might have in other contexts.There were two reasons why we decided not to use topical glossaries. On the one hand we soon discovered that constantly occurring terms such as 'plant' could, even in the field of food science have quite different meanings -either a growing vegetable organism (F -plante) or an industrial facility (F -installation). Surprisingly enough, it turned out that the second meaning was the more frequent even in this subject sector. 
The other reason was that although we based our initial development on the food sector, we realized from the very beginning that if MT was to be of real use at the Commission, it would have to be able to cope with practically any subject sector under the sun.A good illustration of a general purpose basic meaning is the translation assigned to the word 'station'. 'Gare' would have been too specific, 'station' although understandable would rarely be correct, and so we finally opted for 'poste' which is correct in most contexts and understandable in many more. 'Poste de chemin de fer' is not too bad a translation of 'railway station' but 'gare de telecommunications' would be quite unintelligible.This last example brings on to the next level of dictionary coding, the string expression, which can be assigned both a basic meaning and additional syntactic information. 'Railway station' can be given its own meaning 'gare' and coded in such a way that 'station' will in this context be systematically recognized as a noun rather than as a verb. I might add that as often as not, the use of common nouns such as 'station' as verbs is overlooked by the dictionary coder until the day when we get a sentence such as 'France has decided to station troops in Beirut' Many interesting pages could be written on expressions coding, but I would just like to mention here the usefulness of coding various types of string expression as a means of overcoming syntactic ambiguity. The phrase 'in order to' could theoretically have two quite different meanings, depending on whether 'order' is interpreted as a noun in its own right (as in 'He returned it in order to the owner') or simply as part of an infinitive particle expression of purpose. In practice of course, the infinitive particle occurrence is by far the most common and can be entered as the only possibility to be considered. I must admit, however, that although I once predicted the other meaning would never turn up, I have since been proved wrong, but in my opinion, 5000 hits to one miss is far better than 4800 hits and 200 misses even if it happened that one of the 4800 gave the correct resolution of the less common meaning. Indeed, I would say that it is just this kind of pragmatic approach which has been responsible for the success of our development.Once we had built up a reasonable dictionary, we were in a position to take a more objective look at the translation program. We could run new raw batches of MT and study the results of the sentences which, despite the fact that all the words were in the dictionary, still continued to produce errors.At this stage the errors fell into four basic types, those which resulted from insufficient information in the dictionary and which could normally be eliminated without too much difficulty, those resulting from poor homograph analysis which required much more work, those requiring extension to the various stages of the parsing routines which accounted for a substantial amount of effort over the first four years, and those which necessitated the introduction of contextual dictionary rules often in conjunction with semantic markers. 
I shall try to give you a typical example of each in order to illustrate what is bound to happen during the development of any system and how efforts can be made to solve the problems involved.As an illustration of lack of dictionary information, let us take the sentence:-Many institutions but particularly the Commission had been considered by the study.The raw MT might have come out:-Plusieurs institutions mais la Commission avait été particulièrement considerée par l'étude.The translation is of course quite wrong but at first sight, it is difficult to see why it has gone wrong. On checking the dictionary information, we find that everything appears to be correct and even the program seems to have functioned as it should. Only when we get a dump of the actual analysis do we find that 'particularly' has been marked as an adverb governing 'considered'. Had other adverbs been used (e.g. unfortunately the Commission had been considered by the study), the analysis would have been correct.The reason things went wrong was quite simply that while the system provided for codes to mark the affinity between an adverb and a verb or an adjective, no such code existed for an adverb governing a noun. Yet in our sentence 'particularly' is indeed governing a noun. Once we had diagnosed the trouble, we were able to add a new code, slightly modify the analysis program and obtain the correct translation:-Plusieurs institutions mais en particulier la Commission avaient été considerées par l'étude.Not purple prose perhaps, but at least syntactically correct and quite intelligible.The type of homograph error that might have occurred in the early stages can be illustrated by the translation of: -The laboratory analyses improved awareness of the problem. as:-Le laboratoire analyse la conscience ameliorée du problème.What has happened, of course, is that 'analyses' has been resolved as a verb with the result that 'improved' becomes a past participle adjective instead of a simple past tense. This kind of error is far more difficult to eliminate and while special cases such as this could be dealt with quite simply by entering 'laboratory analysis' in the dictionary as a noun phrase, a more general approach to the problem would require systematic study of hundreds, if not thousands of sentences of the same type.Such studies have indeed been conducted over the years and I am pleased to report that they have been largely successful. By and large, we attempt to design the program to make full use of the syntactic and semantic information available on each word in order to arrive at the most likely solution on the basis of a sizable error corpus. Once the program has been modified, similar but new material is run for checking, negative side effects are noted and further modifications are made. Slowly but surely performance increases. However, for some of the more common homograph types such as noun vs verb, a routine can easily run to thirty or forty pages of contextual programming.As an example of a typical parsing error, I would give the following example: -The committee discussed faulty equipment and office management.Early in development, this might have been translated:-Le comité a étudié l'équipement et l'administration de bureau défectueux.as the adjective 'faulty' would be analysed as qualifying both 'equipment' and 'management' rather than just 'equipment'. 
When we read a sentence such as this, there is absolutely no doubt in our minds that the 'faulty' refers only to 'equipment' and not to 'management', the reason being that we have all had plenty of experience of faulty equipment and we know only too well that management committees are unlikely to criticise management, however bad it may be.Yet the computer has no such inborn intelligence. The problem can however be solved on the basis of the most likely syntactic and semantic relationships between nouns and adjectives of given types. For example in -faulty typewriters and photocopiers 'faulty' would govern both nouns as they both come into the same semantic category (both are devices), whereas in our first example, it would qualify only 'equipment' which would not carry the same semantic markers as 'management'.Finally, to turn to the contextual dictionary entries, I will take a seemingly very simple example, the preposition 'in'. I may say that in practice the correct translation of the preposition 'in' has turned out to be one of our most difficult meaning problems. Indeed, there are some 550 contextual entries attached to this unassuming little word, not to speak of a series of special routines which deal with its translation in date structures and in connection with place names.The basic dictionary default for the translation of 'in' is 'dans' but there are a great many cases where that meaning is incorrect.Three simple examples will illustrate the point:'In' governing the name of an organization (on the basis of semantic coding) will be translated 'à' (à la Commission, aux Nations Unies).'In' immediately governing a material will be translated 'en' with no article (en acier, en bois).'In' immediately governing an animate noun will be translated 'chez' with a definite article (chez les hommes, chez les rats).These examples might appear childish in their simplicity, but as I said before, there are over 500 such codings of greater or lesser complexity and it has taken many man years of effort to cover them all adequately. No printed dictionary will give anything like a proper explanation of the variety of translations required. Only on the basis of working through thousands of pages of real text does the true extent of the problem become apparent, and only then can it be dealt with successfully.Now that we have looked at some typical practical problems in MT development, let us turn for a moment to some of the concerns about machine translation often expressed by academic linguists working at research centres or at the universities.Reams and reams have been written about prepositional dependency and its importance in machine translation. How important is the problem, how easy is it to resolve and to what extent can existing systems cope with it?First I would say that the establishment of prepositional dependencies is nowhere like as important as finding the correct translation of the prepositions once their relationships have been established. We have already seen the magnitude of the problem of dealing with the translation of 'in' but the same can be said of most common prepositions (at, to, with, by, for, etc.) .By contrast, the actual setting of relationships is a comparatively simple process, particularly as in the majority of cases in written texts, prepositions govern the noun phrase to the right and are rarely governed by the noun, verb or adjective to the left. 
Where special government does need to be handled, it can almost always be efficiently handled by appropriate contextual coding. It is also of interest to note here that translators seem far less concerned than might be expected by the occasional incorrect setting of prepositional dependencies resulting in the wrong translation.On the subject of antecedents, which again has been given considerable attention in academic circles, in practice we have found that a pronoun subject normally has as its antecedent, the noun which was the subject of the last main clause. Often, of course, the last main clause with a noun, rather than a pronoun, as subject is to be found in a previous sentence. In Systran, however, information about previous sentences is stored and the problem can be easily solved.There are of course exceptions to this general rule, some of which can be successfully handled by the program, some of which can not. And although there have been one or two complaints from translators, by and large the system appears to perform well. At the human level, this type of error is, in any case, among the easiest to correct.I have mentioned these two examples in the interest of restoring some kind of reasonable perspective regarding the problems to be solved in developing new MT systems. Indeed, on the basis of our experience to date, the real problems reside at quite different levels.Rather surprisingly perhaps, the ongoing quality enhancement of raw MT output at this stage seems to be more dictionary bound than program dependent. We do of course continue to enhance the programs, particularly in regard to the establishment of enumerations and the resolution of grammatical homographs and clause boundary setting, but most of our effort goes into the addition of contextual coding entries aimed at providing the translator with as much reliable technical terminology as possible. This is a long, rather tedious process, particularly in an institution like ours where there seems to be no end to the number and complexity of subject matter covered in day to day work. However, as post-editing work consists essentially of finding the 'mot juste' for each and every context, the better the terminology provided by the machine, the easier the job for the translator.I would therefore suggest that in the development of new systems, designers pay far more attention than they have in the past, to creating the best possible dictionary structures and terminology updating features, so as to minimize the human effort involved in dictionary improvement and expansion. Such a system could be based on the interfacing of source language analysis results with word processing systems used for post-editing. If this could be achieved, a dictionary coder would be able to avoid a great deal of analytical coding work as he would be able to make use of menu proposals as a basis for dictionary creation.For example, the post-editing changes made by a translator could be coupled to the source text to provide potential equivalences for inclusion in the dictionary. At the simplest level, these would be at the one-word level and would simply propose as a basic meaning, the equivalent inserted by the post-editor. For example, potential information on a not found word such as 'post-editor' could be created on the basis of its ending (e.g. noun, plural in -s, human, profession, etc.) 
and the meaning inserted by the translator (post-éditeur).
At the next level, expressions requiring post-editing corrections could be listed for approval as string technical terms. 'Word processing' could lead to information such as: noun expression, main meaning 'traitement de textes'.
Finally, in order to facilitate the introduction of the correct meaning of a term in context, the updating algorithm could make use of the parsing information available from analysis in a given text and propose selection criteria for the correct contextual equivalent. As an example, let us take the sentence: 'The equipment appeared to work successfully under normal operating conditions.' In the absence of contextual meanings, the verb 'work' might well have been assigned its basic meaning 'travailler' rather than the post-editor's choice 'fonctionner'. The updating proposals, based on the sentence structure, might look something like this: WORK, meaning 'fonctionner' when:
- a. semantic subject = DEVice
- b. noun subject = equipment
- c. modified by adverb = successfully
- d. used intransitively
- e. dependent on modal = appear
The translator would then simply select one or more of the proposals as a basis for contextual meaning selection. Here he might choose a) and c), or, if he wished to be more specific, he might only select b). Following his selection, which would only take seconds, the necessary syntactic and semantic information would be correctly formatted into a dictionary rule. After updating, which ideally should be immediate for each entry, any conflicts with existing information would be displayed, allowing the coder to make any additional changes with or without the assistance of his colleagues.
This kind of computerized aid to dictionary making would certainly be a tremendous aid to the coder and would make for cheap, rapid dictionary building. If this could be ensured, the overall cost of MT improvement would be drastically cut. And once the algorithm had been successfully developed for dictionaries alone, it could possibly be extended to interface with the program itself as an aid to program enhancement.
And this brings me to my last and most important point.
In our experience, the most time-consuming part of development work has been related to the difficulty experienced by linguists in reducing natural language to logical processing at the programming and dictionary levels. At all levels of the system, translators, linguist-programmers and systems experts have constantly had difficulty in predicting the overall consequences of an addition or modification to the program or dictionaries.
More often than not, it has been necessary to make use of the only remaining tool available, that of trial and error. In language processing there are indeed very few, if any, hard and fast rules. This has meant that each basic rule has led to a general series of exceptions, and each of these has in turn led to more specific exceptions, and so on down the line. Even the seemingly most straightforward rules, such as 'a pronoun immediately preceding a finite verb is the subject of that verb', can lead to dozens of general exceptions and hundreds of specific ones.
It has taken us over eight years to create software packages of well over 100,000 lines of assembler programming and some 150,000 dictionary entries for each language pair. The capacity of the computer has, by the way, not proved to be a problem, nor has running time or even running cost.
The basic problem has been one of training translators and linguists to reduce their know-how to logical patterns of thought which can be programmed into the computer, tested, amended, added to, further tested and so on until reliable results can be obtained.
The work continues year by year, and as the quality of the raw MT improves, so the number of enthusiastic users expands.
However, with more and more users of MT - and remember that over 400,000 pages of MT were run in 1983 on the various production systems now in use - the problem of successfully integrating user feedback becomes ever more complex.
My conclusion today is, then, a very simple one. I would urge all those concerned with the basic design or further improvement of MT systems to concentrate their efforts towards assisting dictionary makers and programmers in their routine work. Rather than coming up with new revolutionary theories based on artificial intelligence or Chomskian linguistics, they should attempt to find ways and means of prompting the human beings concerned with development into making the best possible use of their knowledge and experience by providing them with efficient updating features enabling them to make full use of the feedback received from users.
Finally, let us never forget that MT systems are intended to help overcome language barriers by speeding up the rate at which translators can handle their work. Translators' requirements are thus of paramount importance and should be borne in mind at every stage of design and development. Appendix:
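Returning to the dictionary-updating aid proposed above, the following sketch shows how a post-editor's choice might be turned into candidate selection criteria for a contextual dictionary rule, using the 'work'/'fonctionner' example. The parse-feature names and the record layout are assumptions invented for the sketch; they are not part of any existing Systran facility.

```python
# Sketch of the proposed updating aid: given the parse of a source sentence and
# the meaning the post-editor actually used, offer candidate selection criteria
# for a contextual dictionary rule. All feature names are illustrative.

def propose_criteria(word, parse, posteditor_meaning):
    """Return candidate conditions under which `word` should take the new meaning."""
    proposals = []
    if parse.get("subject_semantics"):
        proposals.append(("a", f"semantic subject = {parse['subject_semantics']}"))
    if parse.get("subject_noun"):
        proposals.append(("b", f"noun subject = {parse['subject_noun']}"))
    if parse.get("adverb"):
        proposals.append(("c", f"modified by adverb = {parse['adverb']}"))
    if parse.get("intransitive"):
        proposals.append(("d", "used intransitively"))
    if parse.get("modal"):
        proposals.append(("e", f"dependent on modal = {parse['modal']}"))
    return {"word": word, "meaning": posteditor_meaning, "criteria": proposals}

# The example sentence from the text, as it might come out of analysis:
parse_of_work = {
    "subject_semantics": "DEVice",
    "subject_noun": "equipment",
    "adverb": "successfully",
    "intransitive": True,
    "modal": "appear",
}
rule = propose_criteria("work", parse_of_work, "fonctionner")
for letter, condition in rule["criteria"]:
    print(f"{letter}. {condition}")
# The coder then picks, say, (a) and (c); the selected conditions would be
# formatted into a contextual dictionary entry for WORK -> fonctionner.
```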
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
492
0.004065
null
null
null
null
null
null
null
null
afb4088b0a96fbc0377beaa282b690dd9078359f
36258304
null
Control and data structures in the {MT} system {SUSY}-{E}
The MT system SUSY-E, which has been developed since 1972 in the Sonderforschungsbereich "Elektronische Sprachforschung" of the University of the Saar, can be divided into three major subsystems: background, dictionary and kernel systems. The background system represents the interface to implementers, linguists and users. The dictionary system supports the construction and maintenance of the different dictionaries and provides the description of the dictionary entries. The proper translation processes are carried out by the use of the kernel systems containing the linguistic knowledge in different representational schemes and allowing for syntactico-semantic analysis and generation of texts. The most elaborate kernel system of SUSY-E is SUSY, which has been constantly developed and tested over the past ten years. Apart from SUSY there exist several new "prototypes" which in their architecture show considerable differences between themselves and especially with regard to SUSY. These new approaches are called SUSY-II systems.
{ "name": [ "Maas, Heinz Dieter" ], "affiliation": [ null ] }
null
null
Proceedings of the International Conference on Methodology and Techniques of Machine Translation: Processing from words to language
1984-02-01
0
0
null
null
null
null
The different variants of SUSY-II are based on a common data structure, the so-called S-graph, which essentially is a chart. By defining dominance and neighbourship relations it is possible to represent the sequence of constituents of phrases as well as their internal structure (in the form of labelled trees).
In contrast to SUSY's data structure (which is organized as a network, but has difficulties in representing sequences of constituents), SUSY-II operates exclusively on trees and sequences of trees - at least from the linguist's point of view. An important advantage of the S-graph is the possibility of representing lexical and structural ambiguity naturally. Moreover, the S-graph is the basic structure of all subparts of SUSY-II, whereas in SUSY a heterogeneous set of data structures is used.
An even more important difference between the kernel systems exists with respect to control structures. In SUSY, the control over the analysis modules is totally programmed and therefore in principle unchangeable. Only minor changes can be achieved by parametrizing rules or sets of rules or by switching off whole modules. In SUSY-II we have created the possibility of describing the control over all analysis operations by the use of a special formal language. In this way the analysis can easily be adapted to special text types (e.g. instructions, headlines, etc.).
In constructing a SUSY-II control structure we will distinguish the following elements of the control language: rules, operators, and modules.
Rules: They contain the elementary linguistic knowledge. The left-hand side of a rule is always a sequence of 1-4 tree structures. If this description matches the actual data structure, the rule normally delivers one new tree. All these rules are programmed. They do not consider any context or competing structures, and are therefore much simpler than SUSY rules.
Operators: Each operator names exactly one rule, together with the conditions under which this rule should be applied. Left and right context can be specified, as well as the mode of application of the rule: substitution or addition. An operator can be iterative: in this case it will be applied as long as it produces changes in the data structures.
Modules: Each module names a sequence of modules or operators. It can be stated under which circumstances the module should work, and whether it is iterative. A sequencing parameter allows the specification of three different modes of processing of the submodule sequence:
- a. preferential: the n-th process stops when its preceding submodule returns a result (n≠1).
- b. stratificational: the n-th submodule will be activated only if the (n-1)-th has delivered a result (n≠1).
- c. unconditional: the submodules are applied in sequence.
The control language provides the linguist with a comfortable tool for the description of his analysis process by specifying a control tree whose nodes are modules (non-terminals) and operators (terminals). Apart from the control structure, the user has to define a formal description of the possible content of the nodes of the analysis trees. These properties are related to the conditions stated within the modules and operators.
This description is used for the "compilation" of the control tree, which results in a compact control structure that can be interpreted by the SUSY-II software system in a comfortable way.
The advantages of the SUSY-II variant, which allows for separate definition of the control mechanism, consist in increased flexibility in constructing analysis processes and easily readable documentation of its architecture. As compared to SUSY, SUSY-II is certainly less efficient as far as runtime is concerned. The main reason for this disadvantage, however, is not the flexible control structure definition, but the necessity of using additive (i.e. non-deterministic) operators.
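As a rough illustration of the control notions just described (operators applying a single rule, modules sequencing their children preferentially, stratificationally or unconditionally), here is a minimal interpreter sketch. The dictionary-based representation of operators and modules, and the abstraction of the S-graph to an opaque data value, are assumptions made for this sketch rather than the actual SUSY-II formalism.

```python
# Sketch of a control-tree interpreter. Each node is a dict: operators carry a
# "rule"; modules carry "children" and a sequencing "mode". Rules return
# (new_data, changed); the S-graph itself is abstracted away here.

def apply_operator(op, data):
    changed_any = False
    while True:
        data, changed = op["rule"](data)
        changed_any = changed_any or changed
        if not (changed and op.get("iterative")):
            return data, changed_any

def apply_module(module, data):
    produced = False
    for i, child in enumerate(module["children"]):
        if i > 0 and module["mode"] == "preferential" and produced:
            break      # an earlier submodule already returned a result
        if i > 0 and module["mode"] == "stratificational" and not produced:
            break      # the (n-1)-th submodule delivered nothing, so stop
        step = apply_module if "children" in child else apply_operator
        data, produced = step(child, data)
    return data, produced

# Example rule: a programmed elementary operation on the data structure.
def attach_adjective(data):
    # ... match a sequence of 1-4 trees and build one new tree ...
    return data, False          # (possibly modified data, whether anything changed)

control_tree = {"mode": "unconditional",
                "children": [{"rule": attach_adjective, "iterative": True}]}
print(apply_module(control_tree, data={}))
```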
null
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
492
0
null
null
null
null
null
null
null
null
bb9372c315c33658013f69e2dadad72480f5772e
237558771
null
Machine translation with post editing versus a three-level integrated translator aid system
The standard design for a computer-assisted translation system consists of data entry of source text, machine translation, and post-editing (i.e. revision) of raw machine translation. This paper discusses this standard design and presents an alternative three-level design consisting of word processing integrated with terminology aids, simple source text processing, and a link to an off-line machine translation system. Advantages of the new design are discussed.
{ "name": [ "Melby, Alan K." ], "affiliation": [ null ] }
null
null
Proceedings of the International Conference on Methodology and Techniques of Machine Translation: Processing from words to language
1984-02-01
5
3
null
The standard design for a computer-assisted translation system consists of three phases: (A) data entry of the source text, (B) machine translation of the text, and (C) human post-editing of the raw machine translation. Most machine translation projects of the past thirty years have used this design without questioning its validity, yet it may not be optimal. This section will discuss this design and some possible objections to it.
The data entry phase may be trivial if the source text is available in machine-readable form already or can be optically scanned, or it may involve considerable overhead if the text must be entered on a keyboard and proofread.
The actual machine translation is usually of the whole text. That is, the system is generally designed to produce some output for each sentence of the source text. Given current analysis systems, some sentences will not receive a full syntactic and semantic analysis, and so there will be a considerable variation in the quality of the output from sentence to sentence.
Also, there may be several possible translations for a given word, even within the same grammatical category and subject matter, which may result in the system choosing one of the translations arbitrarily. That choice will, of course, sometimes be inappropriate. It is well-known that for these and other reasons, a machine translation of a whole text is usually of rather uneven quality. There is an alternative to translating the whole text - namely, "selective translation", a notion which will be discussed further later on.
Revision of the raw machine translation by a human translator seems at first to be an attractive way to compensate for whatever errors may occur in the raw machine translation. However, revision is effective only if the raw translation is already nearly acceptable. Brinkmann (1980) concluded that even if only 20% of the text needs revision, it is better to translate from scratch instead of revising.
The author worked on a system with this standard design for a whole decade (from 1970 to 1980). This design can, of course, work very well. The author's major objection is that the machine translation must be almost perfect or the system is nearly useless. In other words, the system does not become progressively more useful as the output improves from being 50% correct to 60% to 70% to 80% to 90%. Instead, the system is nearly useless until the output improves and passes some threshold of quality. Then, all of a sudden, it becomes very useful. It would, of course, be preferable to work with a design which allows the system to become progressively more useful.
Here is a summary of objections to the standard design:
- Because even if the algorithms start out "clean", they must be "kludged" to make sure that something comes out for every sentence that goes in.
- Because they feel that they are tools of the system instead of artists using tools.
- Because the system has to be worked on for a long time and be almost perfect before it can be determined whether or not any useful result will be obtained.
There has been for some time a real alternative to the standard design - namely, translator aids. These translator aids have been principally terminology aids of various kinds and some use of standard word processing. These aids have been found to be clearly useful. However, they have not attracted the attention of computational linguists because they do not involve any really interesting or challenging linguistic processing. This is not to say that they are trivial.
It is, in fact, quite difficult to perfect a reliable, user-friendly word processor or a robust, easy-to-use automated dictionary, especially if they must both be simultaneously visible on the same screen. But the challenge is more in the area of computer science and engineering than in computational linguistics.
Until now, there has not been much real integration of work in machine translation and translator aids. This paper is a proposal for a system design which allows just such an integration. The proposed system consists of two pieces of hardware: (1) a translator work station (probably a single-user microcomputer) and (2) a "selective" machine translation system (probably running on a more powerful computer and serving multiple users). The translator work station is a three-level system of aids. All three levels look much the same to the translator. At each level, the translator works at the same keyboard and screen. The display is divided into two major windows. One window (which we will call the output window) contains a portion of the translated text. It is a human work area, and nothing goes in it except what the translator puts there. The other window (which we will call the input window) contains various helps such as dictionary entries, segments of source text, or suggested translations.
To the translator, the difference between the various levels is simply the nature of the helps that appear in the input window; and the translator in all cases produces the translation a segment at a time in the output window. Internally, however, the three levels are vastly different. Level 1 is the lowest level of aid to the translator. At this level, there is no need for data entry of the source text. The translator can sit down with a source text on paper and begin translating immediately. The system at this level includes word processing of the target text, access to a local terminology file, and communications either with remote data bases of documents and terminology or with other translators. Level 2 is an intermediate level at which the source text must be available in machine readable form. It can be entered remotely and supplied to the translator (e.g. on a diskette) or it can be entered at the translator work station by a clerk. Level 2 provides all the aids available at level 1 and two additional helps: (a) preprocessing of the source text to search for unusual or misspelled terms, etc., and (b) dynamic processing of the source text as it is translated. The translator sees in the input window the current segment of text to be translated and suggested translations of selected words and phrases found by automatically identifying the words of the current segment of source text and looking them up in the bilingual dictionary that can be accessed manually in level 1.
Level 3 requires a separate machine translation system and an interface to it. Instead of supplying just the source text to the translator work station, the work station receives (on diskette or through a network) the source text and (for each segment of source text) either a machine translation of the segment or an indication of the reason for failure of the machine translation system on that segment. This explains the notion of "selective" machine translation referred to previously. A selective machine translation system does not attempt to translate every segment of text. It contains a formal model of language which may or may not accept a given segment of source text.
If a given segment fails in analysis, transfer, or generation, a reason is given. If no failure occurs, a machine translation of that segment is produced and a problem record is attached to the segment indicating difficulties encountered, such as words missing from the dictionaries and arbitrary syntactic and lexical choices made by the system.
Level 3 provides to the translator all the aids of levels 1 and 2. In addition, the translator has the option of specifying a maximum acceptable problem level, called a tolerance level. When a segment of source text is displayed, if the machine translation of that segment has a problem level which is low enough, the machine translation of that segment will be displayed along with the source text, instead of the level 2 suggestions. The translator can examine the machine translation of a given segment and, if it is judged to be good enough by the translator, pull it into the output window with a keystroke or two and revise it as needed. If, on the other hand, a segment of machine translation slips through the problem check and yet is not worth revising, the translator can simply ignore it or request a level 2 display.
Note that writing a selective machine translation system need not mean starting from scratch. It should be possible to take any existing machine translation system and modify it to be a selective translation system. And translator work stations can provide valuable feedback to the machine translation development team by recording which segments of machine translation were seen by the translator, whether they were used and, if so, how they were revised.
The standard design for a machine translation system and the alternative multi-level design just described use essentially the same components. They both involve data entry of the source text (although the data entry is needed only at levels 2 and 3 in the multi-level design). They both involve machine translation (although the machine translation is needed only at level 3 in the multi-level design). And they both involve interaction with a human translator. In the standard design, this interaction consists of human revision of the raw machine translation. In the multi-level design, this interaction consists of both revision and human translation in which the human uses word processing, terminology lookup, and suggested translations from the computer. At one extreme (level 1), the multi-level system involves no machine translation at all, and the system is little more than an integrated word processor and terminology file. At the other extreme (level 3), the multi-level system could act much the same as the standard design. If every sentence of the source text received a machine translation with a low problem count and high quality, then the translation could conceivably be produced by the translator pulling each segment of translated text into the output window and revising it as needed. The difference between the two designs becomes apparent only when the raw machine translation is not almost perfect. In that case, which is of course common, the multi-level system continues to produce translations, with the human translator translating more segments using level 1 and level 2 aids instead of level 3 aids; the translation process continues with some loss of speed but no major difficulty.
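A minimal sketch of the per-segment selection logic described above follows. The field names (problem_level, level2_suggestions, failure_reason) and the return convention are assumptions invented for the sketch, not the actual work-station interface, and the sample segment is purely illustrative.

```python
# Sketch: decide what to show in the input window for one source segment,
# given the translator's tolerance level. All field names are illustrative.

def choose_display(segment, tolerance):
    mt = segment.get("machine_translation")
    if mt is None:
        # Selective MT: the formal model rejected this segment; show the reason
        # together with the ordinary level 2 suggestions.
        return ("level 2", segment["failure_reason"], segment["level2_suggestions"])
    if segment["problem_level"] <= tolerance:
        # Good enough to offer for revision; the translator may pull it into
        # the output window with a keystroke or two.
        return ("level 3", mt, segment["level2_suggestions"])
    # Too many recorded problems: fall back to level 2 suggestions only.
    return ("level 2", None, segment["level2_suggestions"])

segment = {
    "source": "The equipment appeared to work successfully.",
    "machine_translation": "L'équipement a semblé fonctionner avec succès.",
    "problem_level": 1,
    "level2_suggestions": ["equipment -> équipement", "work -> fonctionner"],
}
print(choose_display(segment, tolerance=2)[0])   # -> level 3
```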
When the same raw machine translation is placed in a standard design context, the translator is expected to revise it in spite of the problems, and according to the author's experience, the translators tend to become frustrated and unhappy with their work. Both designs use the same components but put them together differently. See Figure 1.
Here is a summary of the arguments for a multi-level design:
- Because they can set up a "clean" formal model and keep it clean, because there is no undue pressure to produce a translation for every sentence that goes in.
- Because the system is truly a tool for the translator. The translator is never pressured to revise the machine output. Of course, if the raw machine translation of a sentence is very good and needs only a minor change or two, the translator will naturally pull it into the output window and revise it because that is so much faster and easier than translating from scratch.
- Because the system is useful after a modest investment in level 1. Then level 2 is added and the system becomes more useful. While the system is being used at levels 1 and 2, level 3 is developed, and the machine translation system can become a useful component of the multi-level system when only a small fraction of the source sentences receive a good machine translation. Thus, there is a measurable result obtained from each increment of investment.
The multi-level design grew out of a Naval Research Laboratory workshop in the summer of 1981, a paper on translator aids by Martin Kay (1980), and user reaction to a translator aid system (called a "Suggestion Box" aid) that was tested with a seminar of translators in fall 1981. A demonstration prototype including all three levels, with simulated machine translation being used at level 3, was completed and tested by another seminar of translators in fall 1982. A commercial version is currently under development on an 8086/8088 microcomputer, written in C under the PC-DOS operating system.
A project has recently been approved by the US NSF and the French CNRS to use ARIANE-78.4 as the level 3 machine translation component for a multi-level translator work station (see Boitet 1982). Further papers will discuss the successes and disappointments of a multi-level translation system.
null
null
null
null
null
null
null
null
{ "paperhash": [ "boitet|implementation_and_conversational_environment_of_ariane_78.4,_an_integrated_system_for_automated_translation_and_human_revision", "melby|multi-level_translation_aids_in_a_distributed_system", "brinkmann|terminology_data_banks_as_a_basis_for_high-quality_translation" ], "title": [ "Implementation and Conversational Environment of ARIANE 78.4, An Integrated System for Automated Translation and Human Revision", "Multi-Level Translation Aids in a Distributed System", "Terminology Data Banks as a Basis for High-Quality Translation" ], "abstract": [ "ARIANE-78.4 is a computer system designed to o f fe r an adequate environment for construct ing machine t rans la t ion programs, for running them, and for (humanly) rev is ing the rough t rans la t ions produced by the computer. ARIANE-78 has been operat iona l at GETA for more than 4 years now. This paper refers to version 4. I t has been used for a number of appl icat ions (russian and japanese, engl ish to french and malay, portuguese to engl ish) and has constant ly been amended to meet the needs of the users. Parts of th is system have been presented before [ 2 ,3 ,7 ,8 ] , but i t s whole has only been described in in ternal technical documents.", "At COLING80, we reported on an Interactive Translation System called ITS. We will discuss three problems in the design of the first version of ITS: (1) human factors, (2) the \"all or nothing\" syndrome, and (3) traditional centralized processing. We will also discuss a new version of ITS, which is now being programmed. This new version will hopefully overcome these problems by placing the translator in control, providing multiple levels of aid, and distributing the processing.", "Currently existing terminology data banks serve various purposes. Two major groups, i.e. standardization-oriented and translation-oriented terminology data banks are of special significance. This paper deals exclusively with translation-oriented banks and uses as an example the TEAM terminology data bank system developed by the Language Services Department of SIEMENS." ], "authors": [ { "name": [ "C. Boitet", "P. Guillaume", "M. Quezel-Ambrunaz" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "A. Melby" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Karl-Heinz Brinkmann" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null ], "s2_corpus_id": [ "7252947", "8310536", "37327977" ], "intents": [ [], [], [] ], "isInfluential": [ false, false, false ] }
- Problem: The standard design for a computer-assisted translation system, involving data entry of the source text, machine translation, and human post-editing, may not be optimal due to issues with the quality of machine translation and the effectiveness of human revision. - Solution: The paper proposes an alternative three-level design for a computer-assisted translation system, integrating word processing with terminology aids, simple source text processing, and a link to an off-line machine translation system, aiming to provide a more progressive and useful translation process.
492
0.006098
null
null
null
null
null
null
null
null
5c077b5547de25439dc588b95ec8f878d32d43dd
28311432
null
Development of {E}nglish-{S}panish machine translation
The Pan American Health Organization (PAHO) has been involved in the field of machine translation (MT) since 1976. Its Spanish-English machine translation system (SPANAM) became operational in 1980. The SPANAM system is described in Tucker, Vasconcellos, and León (1980) and Vasconcellos (in press). In 1982, work began on the development of the counterpart English-Spanish system (ENGSPAN). An experimental version of the translation program was in place by October of that year. In August 1983, PAHO was awarded a research grant by the U.S. Agency for International Development (AID) to provide additional support for the ENGSPAN project. This paper describes the approach which is being used for the design and implementation of ENGSPAN. Current working environment: The MT project is staffed by four full-time employees. The head of the project is responsible for the management of both production translation and software development, as well as for the coordination of terminology. A posteditor handles all Spanish-English translation and updates the SPANAM dictionaries. The author is responsible for the design and implementation of ENGSPAN and the maintenance of the SPANAM programs and support software. A second computational linguist, funded through the AID grant, is also working on ENGSPAN. Outside consultants evaluated the feasibility of the project and have also participated in some phases of the development work. The MT system is installed on an IBM 4341 mainframe computer operating under DOS/VSE. The project is assigned a partition of 512K, although our largest program only requires a total of 400K. The programs are written in PL/1. The dictionaries are VSAM files stored on a permanently-mounted disk. Both translation and dictionary updating are done in batch mode. For production translations, the source text is transmitted from the word processor (Wang OIS 140) to the mainframe via telecommunications, and the output is returned to the word processor for postediting. Dictionary updates, tests, and demonstrations can be submitted from either the word processor or the computer terminal. Program development is done from the terminal. The turnaround time depends on the level of use of the computer at the time the job is submitted. Under optimum conditions, SPANAM can process about 700 words per minute of elapsed time. The CPU time ranges from 2,600 to 3,200 words per minute. Translations and dictionary updates can be submitted at any time during the day. Of course, longer jobs running during off-peak hours are the most efficient. The time required for postediting depends on the purpose for which the translation was requested. A polished translation can usually be produced at a rate of about 800-1000 words per hour.
{ "name": [ "Le{\\'o}n, Marjorie" ], "affiliation": [ null ] }
null
null
Proceedings of the International Conference on Methodology and Techniques of Machine Translation: Processing from words to language
1984-02-01
3
2
null
null
null
null
As an organization, PAHO is involved in three different aspects of MT. It is the software developer, the user of the system, and the end-user of the translation. The system developers (linguists and programmers) and the system users (posteditors and dictionary coders) are members of the same team. In fact, everyone on the project staff has some experience in postediting, dictionary coding, and programming. This working environment makes the development staff keenly aware of the needs and desires of those using the system, both from personal experience and from listening to daily feedback. In turn, the posteditor has an understanding of how the algorithm works and can appreciate the relative complexity of the problems encountered.
While the developers are mainly concerned with the linguistic content of the programs, the operational environment is also kept in mind. A recent case in point involved the format in which the side-by-side output was received on the word processor. Before postediting could begin, a time-consuming glossary had to be run in order to remove the source text, unwanted format lines, spaces, and hard carriage returns. This problem was solved by expanding the output module to create a second file containing only the target translation with the necessary Wang control characters and format lines and no unwanted carriage returns. The same translation run can now produce both a target-only document on the word processor and a side-by-side document either on the Wang or the IBM (terminal and/or printer). An extra step in the production cycle was eliminated and the turnaround time improved.
ENGSPAN is being designed to produce Spanish translations of English texts. It is language-pair specific, but not subject-area specific. The input will not be restricted to any particular sublanguage or discipline, nor will it require pre-editing or the use of restricted syntax. The algorithm is being designed with expository text (both technical and general) in mind, but provisions will also be made for other types of text whenever possible.
Our goal is to produce high-quality raw output which requires only a limited amount of postediting to produce a finished translation. While the quality of the raw output is our main concern, ease of operation is also an important consideration. Dictionary updating should be mnemonic, and the user should be required to supply only those codes which cannot be computed from other information already available to the system. The procedures for submitting translations, dictionary updates, dictionary backups, etc. should also be simple. Finally, the system should be efficient in its use of storage space and processing time. When we reach a satisfactory level of quality, ease of operation, and efficiency, we plan to adapt the system to run on a microcomputer. This will make low-cost machine translation available to the PAHO Country Offices and Pan American Centers and to other cooperating institutions in the Member Countries.
An important part of our development strategy is the use of an experimental corpus. The corpus contains over 50,000 running words, taken from texts by different authors and dealing with a variety of health-related topics. It is large enough that it contains examples of a wide range of syntactic and semantic phenomena, yet at the same time it provides us with objective data on the relative frequency of occurrence of different types of constructions.
We intend to concentrate our efforts on the types of syntax found most frequently in the corpus.
The system uses separate files for the source and target dictionaries. The records in both files have a fixed length of 160 bytes. The source entry is linked to its target gloss by means of a 12-digit lexical number (LEX). The first six digits of the LEX are the unique identification number which is assigned to each pair when it is added to the dictionary. The second half of the LEX is used to specify alternate target glosses associated with the same source entry. The main or default target gloss for each pair has zeroes in these positions.
The key for a source entry is the lexical item itself, which may be up to 30 characters in length. The source dictionary is arranged alphabetically. The key for a target entry is the LEX, and the target dictionary is arranged in numerical order.
Words may be entered in the source dictionary either with or without inflectional endings. Most nouns are entered only in the singular and adjectives only in the masculine singular. Verbs are entered as stems. Full-form entries are required for words with highly irregular morphology and for homographs (words which can function as more than one part of speech).
Several source items may be linked to the same target gloss by assigning them the same LEX. For example, irregular forms of the same verb or alternate spellings of a word require only one entry in the target dictionary. Likewise, more than one target gloss can be linked to the same source word through the lexical number. In this case, each alternate gloss is distinguished by coding in the second half of the LEX. Two positions are used to designate terms belonging to microglossaries by subject area, two for glosses corresponding to different parts of speech, and two for context-sensitive glosses for polysemous words.
The dictionaries contain two types of multi-word entries: substitution units (SU) and analysis units (AU). The key for a multi-word entry in the source dictionary is a string consisting of the first six digits of the LEX for each word in the unit. In both cases, the words must occur consecutively in the sentence in order for the unit to be activated.
The basic SU contains from two to five words. This limit of five words was expanded to a maximum of 25 words through a process of nesting one or more such units in a long semantic unit (LSU) which is retrieved on a second pass through the phrase lookup module. When an SU or LSU is retrieved, the dictionary records corresponding to the individual words are replaced with one record corresponding to the entire sequence. The gloss for the unit is also found in a single entry in the target dictionary. An SU record has the same format as a single-word entry and may contain all the same codes. In addition, it may contain a character string which indicates the part of speech of each of its members.
The analysis unit is limited to five words. The AU has several functions. At the very least, it alerts the analysis routines to the possible presence of a common phrase and provides information on its length and function. It can also be used to resolve the part-of-speech ambiguity of any of its members. Finally, it can specify an alternate translation for one or more of its parts. The AU is an entry in the source dictionary but has no counterpart in the target dictionary.
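As an illustration of the LEX linkage described earlier in this section, here is a small sketch of how a 12-digit lexical number might be composed and used to retrieve either the default or an alternate target gloss. The digit layout follows the description above (six-digit pair identifier plus two digits each for microglossary, part-of-speech and context-sensitive alternates); the helper names and the sample Spanish glosses are purely illustrative.

```python
# Sketch only: composing a 12-digit LEX and retrieving a target gloss by it.

def make_lex(pair_id, microglossary=0, pos_alt=0, context_alt=0):
    """Six-digit pair identifier followed by 2+2+2 digits of alternate-gloss codes."""
    return f"{pair_id:06d}{microglossary:02d}{pos_alt:02d}{context_alt:02d}"

# Target dictionary keyed by LEX; the all-zero suffix is the main (default) gloss.
target_dictionary = {
    make_lex(123456):                "funcionar",   # main gloss (illustrative)
    make_lex(123456, context_alt=1): "trabajar",    # context-sensitive alternate
}

def target_gloss(lex):
    # Fall back to the main gloss when the requested alternate is not coded.
    return target_dictionary.get(lex, target_dictionary[lex[:6] + "000000"])

print(target_gloss(make_lex(123456, context_alt=1)))  # -> trabajar
print(target_gloss(make_lex(123456, pos_alt=3)))      # -> funcionar (default)
```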
The record for each source word is retained in the representation of the sentence, but the last two digits of its lexical number are modified if a translation other than the main gloss is desired. When the target lookup is performed, the gloss for each word is retrieved separately.
At the time we began work on ENGSPAN, the SPANAM dictionaries were stored as ISAM files. They contained approximately 54,000 pairs of entries, including 13,000 single words and 3,000 phrases which had been hand-coded by the MT staff, 9,000 general vocabulary items, and 29,000 medical terms. We also had very user-friendly programs for updating and displaying the dictionaries. In order to take advantage of this considerable investment of time and money, it was decided to use the same record format and to write a program to reverse the dictionaries.
Each dictionary was copied to tape, skipping the records for multi-word entries, inflected forms, auxiliary verbs, prepositions, and items coded as deprecated terms. The Spanish records were sorted into numerical order by LEX and the English records into alphabetical order by the lexical item. The new files were checked for duplicate keys. Whenever more than one record with the same LEX was encountered, the set of records was examined and reordered according to criteria based on the part of speech (verb, noun, adjective, other), reliability code (highest to lowest), and source code (PAHO term, medical term, general term). When the dictionaries were reloaded, the first record became the main entry for the word. The key of each subsequent record was made unique by concatenating an asterisk on the end of the word or adding 1 to the last digit of the LEX.
When the reversed dictionaries were printed out in side-by-side format, multiple source and target entries were grouped together. The dictionary coder then reviewed these entries to determine whether the first entry in each set was the most appropriate entry for the ENGSPAN system and to identify those entries which should be treated as homographs. After the necessary adjustments were made, the extra entries on each side were deleted automatically. Figure 1 shows a page from the newly reversed dictionary, prior to any human intervention.
The reversal program produced a total of 44,404 English source entries, including 4,725 duplicates. After the duplicates were removed, and new entries were made for the auxiliary verbs and prepositions, the dictionaries contained approximately 40,000 pairs. Although some glosses still need to be improved, most of the codes for part of speech, gender, and number are correct.
The dictionary reversal provided us with a large source dictionary consisting mainly of uninflected English words. Our next task was to devise a lookup strategy which could find either the canonical form or an inflected form of a word. A lemmatization procedure (LEMMA), written by the late Dr. R. Ross Macdonald of Georgetown University, was adapted for use with the system.
The dictionary lookup consists of a series of steps which are performed until a match is found for the input word. First, a high-frequency table is checked. Then the full form is looked up in the main dictionary. If the word is not found, LEMMA is called. This procedure checks for the presence of a number of different endings, including -'s, -s', -s, -ly, -ed, -ing, -er, -est, and -n't. Each time an ending is removed, the new form of the word is looked up again.
LEMMA makes use of morphological and spelling rules and short lists of exceptions in order to determine when to remove or add a final -e, whether the word ends in a double consonant, etc. If a lemmatized form of the word is found in the dictionary, its record is checked to make sure that its part of speech corresponds with the ending which was removed. If LEMMA exhausts all its possibilities, the word is checked against a small list of prefixes (re-, non-, un-, sub-, and pre-). If one of these prefixes can be removed, another lookup is performed. If this final lookup is unsuccessful, a dummy record is created for the word and a gap analysis routine is called. This routine uses the information provided by LEMMA and a table of other derivational suffixes in order to determine the possible parts of speech of a not-found word.

This lookup strategy facilitates working with random text. It also helps to keep the dictionary smaller. The dictionary coder has the option of entering a word with all its affixes or entering something less than the full form. When dealing with irregular forms and homographs, the full form must be used. For example, the dictionary must contain "meet," "met," and "meeting," but the forms "meets" and "meetings" are not required. Although the word "unwittingly" could be found as "wit," it would be difficult to generate a satisfactory Spanish translation for the adverb based on the gloss for the noun. Thus the dictionary should contain both "unwitting" and "wit" but does not need to have an entry for "unwittingly."

An original program (MTSCODE) was written to produce a KWIC concordance based on sequences of dictionary codes. It was devised as a tool for examining large portions of the English corpus and identifying the common syntactic patterns. Any document on the word processor can be used as the input text. The program uses the input and lookup procedures which were developed for the translation program. Therefore, it does not require full-form dictionary entries and can be run quite successfully on random text. By specifying different options at run time, the user can have the KWIC records sorted by left or right context; by dictionary codes, words, reversed words, or lexical numbers; and in alphabetical or reverse alphabetical order. Frequency counts and lists of words that are missing from the dictionary can also be obtained.

MTSCODE has proved to be a valuable tool for monitoring the part-of-speech and homograph coding in the newly reversed dictionaries. It is also helpful for studying the environments of various types of homographs. Since the MTSCODE output is a display of the principal codes available to the analysis procedures, it is assisting us in formalizing our syntactic rules. Figure 2 is an example of one type of output produced by MTSCODE.

The depth of coding inherited from the SPANAM dictionaries was not sufficient for the analysis of English. Indeed, the need for deeper coding has been one of the stumbling blocks to the further enhancement of the Spanish-English algorithm. As originally designed, the dictionary record consisted of 160 bytes, which were used to store information in character format in a total of 82 fixed fields. Many of these fields contained binary information (the presence or absence of a particular feature) signalled by the characters "0" (zero) and "1" (one). Many of the new codes to be introduced also lent themselves to a binary treatment.
Instead of increasing the size of the record to accommodate the new codes, it was decided to use the existing space more efficiently by subdividing certain bytes into bit fields. A total of 18 bytes were converted to bit fields, which yielded 144 fields for binary codes.

Some of the new bit fields are used to store information about the syntactic and semantic features of verbs, nouns, and adjectives. For example, verbs and deverbal nouns are specified as occurring with one or more of the following codes: no object, one object, two objects, complement, no passive, locative, marked infinitive, unmarked infinitive, declarative clause, imperative clause, interrogative clause, gerund, adjunct, bound preposition, and object followed by bound preposition. Subject and object preferences can be specified as ±Human, ±Animate, and ±Concrete. Noun features include count, bulk, concrete, human, animate, feminine, proper, collective, locative, time, body part, condition, and treatment. The need for additional noun features and the exact specification of adjective features are being determined as work progresses on the translation algorithm. One of the references being used for the coding of English entries is Naomi Sager's description of the Linguistic String Parser (1981).

The conversion to the new record format was accomplished by means of a special-purpose program which rearranged the existing fields and codes. The new codes are being introduced manually. Mnemonic descriptors were added to the dictionary update and display programs so that the dictionary coders do not have to work with binary representation. The PL/1 code is also quite easy to read, since each bit is referred to by a mnemonic identifier.

Another modification of the coding system involved the part-of-speech codes, which were expanded to permit the subclassification of determiners, numeratives, adjectives, pronouns, modifiers, and conjunctions. The number of possible homograph types was also increased. Words are coded as homographs if they are expected to occur as more than one part of speech in the type of text for which the system is designed. Thus, while the number of homographs in the machine dictionaries is not limited to actual occurrences in the corpus, neither does it include all possible uses of every word.

An attempt is being made to find the optimum degree of specificity in coding that will produce the desired quality of output without overburdening the algorithm or the dictionary coder. New codes are being introduced gradually as they are needed in order to obtain a correct translation. Additional fields can be created or the use of existing ones changed, as necessary.

The first version of ENGSPAN was created by combining the existing input and output modules with the new source lookup procedure. Since we have been producing some type of Spanish output from the outset, we have been constantly reminded of the requirements for target synthesis. We will not fall into the trap of spending all our time trying to analyze English and have no Spanish to show for it. We are also able to get the reactions of native Spanish speakers whenever we have output that is presentable enough to show to them. Table 1 contains a list of the support software and other program modules originally developed for SPANAM which are also used for ENGSPAN. Table 2 lists the new program modules which were written for ENGSPAN during 1982 and 1983. Each new module has produced a noticeable improvement in the output, but many important areas remain to be addressed.
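To make the bit-field idea concrete, the following sketch shows how binary feature codes can be packed into the bytes of a fixed-length record and read back through mnemonic names. It illustrates the technique only; the feature names and byte/bit positions are invented and do not reproduce ENGSPAN's actual record layout or its PL/1 declarations.

```python
# Hedged sketch: packing binary feature codes into bytes of a fixed-length record.
# Feature names and byte/bit positions are invented for illustration.

FEATURE_BITS = {            # (byte offset, bit position) within the 160-byte record
    "no_object":      (0, 0),
    "one_object":     (0, 1),
    "two_objects":    (0, 2),
    "bound_prep":     (0, 3),
    "noun_human":     (1, 0),
    "noun_animate":   (1, 1),
    "noun_concrete":  (1, 2),
}

def set_feature(record: bytearray, name: str, value: bool = True) -> None:
    byte, bit = FEATURE_BITS[name]
    if value:
        record[byte] |= 1 << bit
    else:
        record[byte] &= ~(1 << bit)

def has_feature(record: bytearray, name: str) -> bool:
    byte, bit = FEATURE_BITS[name]
    return bool(record[byte] >> bit & 1)

record = bytearray(160)                 # fixed-length record, as in the dictionaries
set_feature(record, "one_object")
set_feature(record, "noun_animate")
print(has_feature(record, "one_object"), has_feature(record, "no_object"))  # True False
```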
We have already begun developing a general parsing algorithm and new types of dictionary entries for triggering context-sensitive glosses. Several different approaches are being considered for improving the treatment of prepositions and adjuncts. Special attention will be given to the synthesis of clitic pronouns, the use of the definite article, and the requirement for the subjunctive mood in Spanish. A long-range task is the development of knowledge structures and means of representing the semantic content of sentences and larger chunks of text. Some of ENGSPAN's new modules are described below.

The first of these is a combined analysis and transfer routine which was written as a temporary procedure for handling the most frequent types of verb strings until a more general parsing algorithm could be developed. It identifies verb phrases in the source text, resolves homographs involving auxiliaries and main verbs, attempts to determine the subject of each finite verb, and introduces codes that will eventually trigger the synthesis of the proper Spanish inflections. It rearranges auxiliaries, adverbs, and "not"; deletes the pronoun "it" when it occurs as the subject; and deletes the auxiliary "do" when it occurs in questions. It triggers constructions using "haber" when the verb phrase is preceded by "there." English passives are rendered using "se" and the finite form of the verb unless the agent is expressed. The subjunctive mood and the imperfect tense are specified in certain contexts. There are several rules which select between "ser" and "estar."

POSAMBIG

This module attempts to determine the part of speech of words that are coded as homographs and have not already been resolved as verbs. It does so by examining the left and right context of each word. For each homograph type there is a default decision which is used when the context does not meet any of the criteria specified in the algorithm. Additional homograph types need to be added to this module, and some of the existing criteria need to be improved. The function of this module will eventually be performed by the parsing algorithm.

A pattern matching procedure is used for the recognition of noun phrases. The parts of speech of the words are matched with a set of patterns which may begin with an adjective, adverbial modifier, or noun. The routine triggers the agreement of adjectives, determiners, and numeratives in premodifying position and the agreement of past participles in postmodifying position. It also specifies the word order within the target phrase. If a noun premodifier is moved to the right of the head noun, the preposition "de" is inserted. The definite article is inserted before some types of noun phrases if there is no determiner or numerative. A total of 19 noun phrase patterns are currently being tested. The results are being compared with the desired translation of the noun phrases found in the corpus in order to determine the additional types of coding and analysis which are needed.

The procedure for the synthesis of Spanish verb forms is based on principles of generative morphology and phonology. The program synthesizes regular and irregular verbs, in all tenses and moods except the future subjunctive, and in all persons except the second person plural. The verb is entered in the target dictionary in its stem form. Binary codes are used to specify the conjugation class and 11 exception features which govern the synthesis of irregular forms. Only one dictionary entry is needed for each verb.
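The POSAMBIG strategy, context tests with a per-homograph default, can be pictured with a toy rule set. This is a hedged sketch: the noun/verb rules, word lists, and function names below are invented, and the real module's criteria and homograph types are considerably richer.

```python
# Toy illustration of POSAMBIG-style homograph resolution:
# examine left/right context, otherwise fall back to a per-homograph default.
# Rules and categories here are invented, not ENGSPAN's actual criteria.

DETERMINERS = {"the", "a", "an", "this", "these"}
PRONOUN_SUBJECTS = {"we", "they", "i", "you"}

def resolve_noun_verb(tokens: list[str], i: int, default: str = "noun") -> str:
    """Decide whether the homograph at position i is a noun or a verb."""
    left = tokens[i - 1].lower() if i > 0 else ""
    right = tokens[i + 1].lower() if i + 1 < len(tokens) else ""
    if left in DETERMINERS:              # "the study ..." -> noun
        return "noun"
    if left in PRONOUN_SUBJECTS:         # "they study ..." -> verb
        return "verb"
    if right in DETERMINERS:             # "... study the ..." -> verb
        return "verb"
    return default                       # no criterion matched: use the default decision

print(resolve_noun_verb("they study the corpus".split(), 1))    # verb
print(resolve_noun_verb("the study was completed".split(), 1))  # noun
```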
For verb synthesis, a small number of highly irregular stems and endings are listed in the program itself. The majority of verbs require no synthesis coding except for the conjugation class. The procedure consists of a series of morphological spellout rules; raising, lowering, diphthongization, and deletion rules based on phonological processes; stress assignment rules; and orthographic rules to handle predictable spelling changes.

A second procedure performs the synthesis of feminine and plural endings for determiners, numeratives, adjectives, and nouns. The algorithm contains rules for forming all regular plurals and handling many irregular forms. The majority of Spanish nouns and adjectives require no special synthesis coding in the dictionary entry. If the gloss consists of more than one word, synthesis will be performed on the first word in the default situation. The item may be coded for synthesis of every word or only specific words.

The analysis procedures described above are based entirely on the recognition of local syntactic patterns. They break down whenever long-distance relationships are involved. From the beginning of the project we knew that we would have to expand the horizons of our analysis routines. The main thrust of our current work is the development of an augmented transition network (ATN) parser, similar to the one described by Winograd (1983). The ATN was selected because it is compatible with our existing architecture, which has a strong syntactic orientation. It provides an effective means of dealing with homographs and allows for the selective use of semantic coding. The ATN parser is being designed to provide us with the information we need for Spanish synthesis. At present, we are working only at the sentence level. Eventually, we plan to save certain types of information about previous sentences.

The current version of the parser has four networks: sentence, noun phrase, verb phrase, and prepositional phrase. It also has a special procedure for handling conjoining within the phrase. Each network consists of a set of states connected by arcs. Four types of arcs are used: category arcs, which can be taken if the part of speech matches that of the input word; jump arcs, which can be taken without matching a word of the input; seek arcs, which indicate recursive calls to a network; and send arcs, which indicate successful completion of processing in a network.

An augmented transition network allows conditions and actions to be associated with the arcs. If there is a condition on an arc, it must be satisfied before the arc can be taken. If an action is specified, it is performed whenever the arc is taken. The use of conditions provides a mechanism for introducing into the grammar a degree of sensitivity to the left context and to semantic criteria. The actions are used to store the intermediate and final results of the analysis in registers which are available both to the parser and to the synthesis routines.

The algorithm performs a sequential parse with chronological backtracking. The order in which the arcs are tested is specified by the linguist, and the parser stops after completing the first successful parse. The algorithm processes the words of the input string one at a time, moving from left to right. All possible arcs that may be taken for a word at the current state are placed on a pushdown stack. The parser tests each arc on the stack until it finds one that matches the current word. It continues through the input string as long as it can find an arc which it is allowed to take.
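A minimal sketch of the arc-testing loop described above is given below, assuming a toy network with only category and jump arcs; seek and send arcs, conditions, and actions on registers are omitted, and the grammar, state names, and lexicon are invented rather than taken from ENGSPAN. Recursion provides the chronological backtracking, and the first successful path is returned.

```python
# Minimal ATN-style sketch: states connected by arcs, tried in the order listed,
# with chronological backtracking via recursion. Seek/send arcs, conditions, and
# register actions are omitted; the tiny grammar and lexicon below are invented.

NETWORK = {
    "S":  [("cat", "det", "S1"), ("jump", None, "S1")],
    "S1": [("cat", "noun", "S2")],
    "S2": [("cat", "verb", "S3")],
    "S3": [("cat", "det", "S4"), ("jump", None, "S4")],
    "S4": [("cat", "noun", "FINAL"), ("jump", None, "FINAL")],
}
FINAL_STATES = {"FINAL"}

LEXICON = {"the": "det", "report": "noun", "describes": "verb", "system": "noun"}

def parse(words, state="S", pos=0, path=()):
    """Depth-first search over the network; returns the sequence of arcs taken."""
    if state in FINAL_STATES:
        return path if pos == len(words) else None
    for kind, cat, target in NETWORK.get(state, []):    # arcs are tried in order
        if kind == "jump":                               # jump arc: consume no input
            result = parse(words, target, pos, path + ((state, "jump"),))
        elif pos < len(words) and LEXICON.get(words[pos]) == cat:
            result = parse(words, target, pos + 1, path + ((state, words[pos]),))
        else:
            result = None                                # arc blocked: try the next one
        if result is not None:
            return result                                # first successful parse wins
    return None                                          # dead end: backtrack

print(parse("the report describes the system".split()))
```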
If no arc is found for the current word, the parser backtracks and tests the alternative arcs which were saved on the stack. If the end of the string is reached and the algorithm is at a final state in the network, the parse is successful. If no path can be found through the network, the parse fails.

In the event of an unsuccessful parse, ENGSPAN is still expected to produce some kind of a translation. We are experimenting with several strategies for recovering information from a failed parse. For example, whenever backtracking takes place, information regarding the longest successful path is saved. It may be possible to resume the parse at another point in the input string. We are also investigating ways of making the parser more efficient, such as saving well-formed substrings and doing explicit rather than chronological backtracking.

The ATN parsing algorithm is being developed in an independent PL/1 program, using the ENGSPAN input and dictionary lookup modules. The network is read in at runtime, making it possible to experiment with different network configurations without recompiling the program. The next step will be to link the two programs so that ENGSPAN's synthesis modules can access the sentence, clause, and phrase registers created by the parser. If the parse is not successful, ENGSPAN's local disambiguation and analysis routines will be used to fill in as much missing information as possible in order to obtain a default translation. The diagram in Figure 3 shows how the ENGSPAN model will look when the parser has been incorporated.

The strategy regarding the use of multi-word dictionary entries is under review in light of the requirements of the ATN parser and the analysis of conjoined phrases. There is a need to change the way the substitution unit is used and to design several new types of dictionary entries.

The substitution unit should not be used if the parser needs to access the syntactic and semantic codes for each word. This is the case whenever there is a relatively high probability that the phrase may be part of a conjoined structure. For example, the phrase "tertiary care" can be expected to occur as "primary, secondary, and tertiary care." It is also necessary when the same sequence of lexical items can occur with different functions, such as "drug control" and "the use of this drug controls the symptoms." If the parser is to do its job, the number of phrases which can be handled as SUs turns out to be relatively small. These include phrasal prepositions such as "in lieu of," expressions such as "by leaps and bounds," the names of organizations, meetings, and documents, and the names of chemical substances. Many sequences which were formerly entered as SUs can be better handled as analysis units.

With the reduced use of the SU, the nesting of SUs in order to handle sequences of more than five words is no longer feasible, and a new method of handling long units is needed. It is planned to use a variable-length record in the same dictionary. Procedures must be developed to make it as easy as possible for the dictionary coder to add, change, and delete the new type of entry. The implementation of this change will require modifications in the ENGSPAN, DPRINT, and UPDATE programs.

Another type of dictionary entry is being developed to handle lexical items such as phrasal verbs which are likely to occur as noncontiguous words in the input; a sketch of how such an entry might be matched follows below.
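The following is a speculative sketch of how a dictionary entry for a phrasal verb might be matched even when its parts are separated in the input, which is the situation this new entry type is meant to cover. The entry format, the gap limit, and the glosses are all invented; they do not describe the record format actually planned for ENGSPAN.

```python
# Hedged sketch of matching a phrasal verb whose particle may be separated
# from the verb ("look the number up"). The entry format is invented.

PHRASAL_ENTRIES = [
    {"head": "look", "particle": "up", "max_gap": 3, "gloss": "buscar"},
    {"head": "carry", "particle": "out", "max_gap": 3, "gloss": "llevar a cabo"},
]

def find_phrasal(tokens: list[str]) -> list[tuple[int, int, str]]:
    """Return (head index, particle index, gloss) for each phrasal verb found."""
    matches = []
    for entry in PHRASAL_ENTRIES:
        for i, tok in enumerate(tokens):
            if tok != entry["head"]:
                continue
            window = tokens[i + 1 : i + 1 + entry["max_gap"] + 1]
            if entry["particle"] in window:
                j = i + 1 + window.index(entry["particle"])
                matches.append((i, j, entry["gloss"]))
    return matches

print(find_phrasal("they will look the number up tomorrow".split()))
# [(2, 5, 'buscar')]
```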
An entry of this type will be used when it may be necessary to replace the individual source dictionary records with another record containing the syntactic and semantic features of the multi-word lexical item. The entry will be retrieved from the dictionary during the parsing of the sentence; the parser will determine whether or not the individual records should be replaced by the multi-word entry.

Still another type of dictionary entry is being developed to specify an alternate translation of a word which depends on the occurrence of a specific word or set of features in one of its arguments. This entry will be used by a transfer procedure which is called after the parse has been completed. The procedure will access the structural information produced by the parser in order to locate the argument in question. If the argument meets the conditions specified in the transfer entry, the alternate translation will be selected.

Figure 4 contains a page of unedited English-Spanish machine translation produced by ENGSPAN in January 1984. The output is in word-processing format. This sample is provided to demonstrate that ENGSPAN is working, but also that there is still a lot more work to do. Figure 5 shows the dictionary entries for some of the words in the sample text. We have also included, as Figure 6, the raw output which was obtained for the same page of text before any dictionary updating had been done. It is presented in the working format produced on the computer printer. It provides an indication of the results that could be expected for random input text at this time.

We plan to have the new version of ENGSPAN ready for pilot production by the end of 1984. The output will probably require a substantial amount of postediting, but we expect to be able to show a cost advantage over manual translation.
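To illustrate the argument-conditioned transfer entry described above, here is a hedged sketch in which the alternate gloss of a verb is chosen by inspecting a feature of its object after the parse; the rule format, the features, and the Spanish glosses are invented examples, not ENGSPAN's actual transfer entries.

```python
# Hedged sketch of a post-parse transfer rule: pick an alternate gloss for a
# word when one of its arguments carries a specified feature. Rule contents
# are invented for illustration.

TRANSFER_RULES = [
    # (source word, argument role, required feature, alternate gloss)
    ("treat", "object", "human",     "atender"),
    ("treat", "object", "condition", "tratar"),
]

DEFAULT_GLOSS = {"treat": "tratar"}

def choose_gloss(word: str, arguments: dict[str, set[str]]) -> str:
    """arguments maps a role (e.g. 'object') to the feature set of that argument."""
    for src, role, feature, gloss in TRANSFER_RULES:
        if src == word and feature in arguments.get(role, set()):
            return gloss                      # condition met: take the alternate gloss
    return DEFAULT_GLOSS.get(word, word)      # otherwise fall back to the main gloss

# "treat the patient" vs. "treat the infection"
print(choose_gloss("treat", {"object": {"human", "animate"}}))   # atender
print(choose_gloss("treat", {"object": {"condition"}}))          # tratar
```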
null
Main paper: : As an organization, PAHO is involved in three different aspects of MT. It is the software developer, the user of the system, and the end-user of the translation. The system developers (linguists and programmers) and the system users (posteditors and dictionary coders) are members of the same team. In fact, everyone on the project staff has some experience in postediting, dictionary coding, and programming. This working environment makes the development staff keenly aware of the needs and desires of those using the system, both from personal experience and from listening to daily feedback. In turn, the posteditor has an understanding of how the algorithm works and can appreciate the relative complexity of the problems encountered.

While the developers are mainly concerned with the linguistic content of the programs, the operational environment is also kept in mind. A recent case in point involved the format in which the side-by-side output was received on the word processor. Before postediting could begin, a time-consuming glossary had to be run in order to remove the source text, unwanted format lines, spaces, and hard carriage returns. This problem was solved by expanding the output module to create a second file containing only the target translation with the necessary Wang control characters and format lines and no unwanted carriage returns. The same translation run can now produce both a target-only document on the word processor and a side-by-side document either on the Wang or the IBM (terminal and/or printer). An extra step in the production cycle was eliminated and the turnaround time improved.

ENGSPAN is being designed to produce Spanish translations of English texts. It is language-pair specific, but not subject-area specific. The input will not be restricted to any particular sublanguage or discipline, nor can it require pre-editing or the use of restricted syntax. The algorithm is being designed with expository text (both technical and general) in mind, but provisions will also be made for other types of text whenever possible.

Our goal is to produce high-quality raw output which requires only a limited amount of postediting to produce a finished translation. While the quality of the raw output is our main concern, ease of operation is also an important consideration. Dictionary updating should be mnemonic and the user should be required to supply only those codes which cannot be computed from other information already available to the system. The procedures for submitting translations, dictionary updates, dictionary backups, etc. should also be simple. Finally, the system should be efficient in its use of storage space and processing time. When we reach a satisfactory level of quality, ease of operation, and efficiency, we plan to adapt the system to run on a microcomputer. This will make low-cost machine translation available to the PAHO Country Offices and Pan American Centers and to other cooperating institutions in the Member Countries.

An important part of our development strategy is the use of an experimental corpus. The corpus contains over 50,000 running words, taken from texts by different authors and dealing with a variety of health-related topics. It is large enough that it contains examples of a wide range of syntactic and semantic phenomena, yet at the same time it provides us with objective data on the relative frequency of occurrence of different types of constructions.
null
null
null
null
{ "paperhash": [ "bates|language_as_a_cognitive_process", "vasconcellos|management_of_the_machine_translation_environment:_interaction_of_functions_at_the_pan_american_health_organization" ], "title": [ "Language as a Cognitive Process", "Management of the machine translation environment: interaction of functions at the Pan American Health Organization" ], "abstract": [ "Books reviewed in the AJCL will be those of interest to computat ional linguists; books in closely related disciplines may also be considered. The purpose of a book review is to inform readers about the content of the book and to present opinions on the choice of material, manner of presentat ion, and suitability for various readers and purposes. There is no limit to the length of reviews. The appropriate length is determined by its content. If you wish to review a specific book, please contact me before doing so to check that it is not already under review by someone else. If you want to be on a list of potential reviewers, please send me your name and mailing address together with a list of keywords summarizing your areas of interest. You can also suggest books to be reviewed without volunteering to be the reviewer.", "Spanish-English machine translation at the Pan American Health Organization (WHO regional office) has been fully operational since early 1980. The environment supports, at the same time: production, terminology retrieval, dictionary and program maintenance, and advanced development of a new system from English into Spanish. The interaction of these activities strengthens all of them mutually." ], "authors": [ { "name": [ "Lyn Bates", "T. Winograd" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Muriel Vasconcellos" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null ], "s2_corpus_id": [ "2209224", "237295825" ], "intents": [ [], [] ], "isInfluential": [ false, false ] }
Problem: The paper describes the approach used in the design and implementation of ENGSPAN, PAHO's English-to-Spanish machine translation system. Solution: The working hypothesis is that ENGSPAN can produce high-quality Spanish translations of English texts that require only a limited amount of postediting, can be operated efficiently, and will ultimately show a cost advantage over manual translation.
492
0.004065
null
null
null
null
null
null
null
null
da7797c097f7d4d922f22c0a1a23927d22377c67
237558755
null
Application of SYSTRAN for translation of nuclear technology texts at the Nuclear Center of Karlsruhe
Four years ago the Nuclear Center of Karlsruhe began to apply the Systran MT program to the translation of nuclear technology texts from French into English. During this period the Systran program has been updated several times and about 8,000 entries have been made in the stem dictionary to adapt the MT program to the special field. This has resulted in a substantial improvement in the quality of the translated texts. A quantitative judgement of this quality was obtained by repeated statistical analysis of some representative sample texts. The results of these analyses are presented and discussed.
{ "name": [ "Habermann, F. W. A." ], "affiliation": [ null ] }
null
null
Proceedings of the International Conference on Methodology and Techniques of Machine Translation: Processing from words to language
1984-02-01
0
0
null
null
null
null
null
null
Main paper: Appendix:
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
492
0
null
null
null
null
null
null
null
null
9178bed01c6d85186e87295d935449cb656c5924
28373620
null
A new dictionary structure for bi-directional MT system
The importance and structure of the MT-dictionary were discussed extensively by many researchers in machine translation in the past. These structures were mainly concerned with MT-dictionaries for one-way translation systems. In the present paper, a new dictionary structure for bi-directional machine translation is being introduced. The new structure is being tested for Chinese-English as well as English-Chinese machine translation.
{ "name": [ "Loh, Shiu-chang and", "Kong, Luan and", "Hung, Hing-sum" ], "affiliation": [ null, null, null ] }
null
null
Proceedings of the International Conference on Methodology and Techniques of Machine Translation: Processing from words to language
1984-02-01
9
0
null
The importance and structure of the MT-dictionary were discussed extensively by many researchers in machine translation in the past (Knowles 1982, Lamb and Jacobsen 1966, Liu 1982, Loh 1975, Oettinger 1960, Wang 1982, Wang, T'sou and Chan 1971). These structures were mainly concerned with MT-dictionaries for one-way translation systems. Dictionary structures for bi-directional or multi-language machine translation systems were rarely discussed. The aim of this paper is to introduce a dictionary structure suitable for a multi-language translation system. The dictionary structure was designed in conjunction with the Dual Language Translator (DLT) developed at the Chinese University of Hong Kong in 1978 (Loh, Hung and Kong 1978).
null
It is generally agreed that for translation from one language L1 into another language L2, the MT-dictionary D12 must contain the following information:

D12 = { IC_L1, GI_L1, IC_L2, GI_L2 }

where IC_L1 is a set of internal codings of the source lexical items in L1, GI_L1 is a set of grammatical information for these items, IC_L2 is a set of target equivalences (the target lexical items in L2) of these items, and GI_L2 is a set of grammatical information for these target equivalences.

Similarly, for translation from language L2 into language L1, the MT-dictionary D21 must also contain these types of information.

For a one-way translation system, this kind of dictionary structure may seem to be quite suitable. However, for a bi-directional language translation system, this kind of dictionary structure requires almost identical information to be kept in two different storages, which is redundant and undesirable. A new structure for the dictionary is thus required. Bearing these points in mind, we proposed a structure for the MT-dictionary of a multi-language translation system.

The basic organization of the proposed MT-dictionary is illustrated in Fig. 2.1. The two main components of the dictionary are the DICTIONARY ADMINISTRATOR and the n SUB-DICTIONARIES. Each of the n SUB-DICTIONARIES contains the information on the lexical items of one of the n languages concerned.

Each entry of a SUB-DICTIONARY specifies the codings of a lexical item, its grammatical information and (n-1) pointers which point to the entries of the other (n-1) SUB-DICTIONARIES where the target equivalences of the lexical item can be found, respectively (Fig. 2.2). An implementation of this dictionary structure is the dictionary of the Dual Language Translator (DLT) for the translation between Chinese and English. This dictionary consists of a DICTIONARY ADMINISTRATOR, a Chinese SUB-DICTIONARY and an English SUB-DICTIONARY (Fig. 2.3). The actual organization of a SUB-DICTIONARY will be discussed in the following section.

Basically, there are three main types of information in a SUB-DICTIONARY, namely, CONTROL INFORMATION, SYNTACTIC/SEMANTIC ITEMS and a COMMON DATA-POOL (Fig. 3.1). The CONTROL INFORMATION specifies the identification of the [...]. The reason for separating these items from the others is to speed up the translation process. During the translation process, these items will be kept in computer main memory. Thus dictionary consultation for these items will not be necessary, consequently reducing the time needed for lexical analysis. Due to the limitation of the size of computer main memory, the number of these special items is limited.

Lexical information records are used for lexical analysis. A traditional method for representing lexical information is by means of a linear list such as illustrated in Fig. 3.2. The disadvantages of this method are that duplication of lexical information exists and the search for a lexical item may have to be carried out linearly. We can rewrite the same list in Fig. 3.2 into a tree (Fig. 3.3), and a linked list representation of such a tree is given in Fig. 3.4. It is by linked lists that the lexical information in the SUB-DICTIONARIES is represented. The format of the lexical information records is illustrated in Fig. 3.5. The format of this type of record is illustrated in Fig. 3.6.

The associated information records are used for determining the particular properties of the items, such as the special article or measure word required, etc., and their format is illustrated in Fig. 3.7.
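The pointer-based organization described above, in which each entry carries (n-1) cross-links to the other SUB-DICTIONARIES, can be pictured with a small sketch. This is an illustration only: the class and field names and the two-language example are invented and are not the DLT's actual record layouts.

```python
# Illustrative sketch of the proposed multi-language dictionary: one sub-dictionary
# per language, each entry holding (n-1) pointers to its equivalents elsewhere.
# Names and layout are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class Entry:
    coding: str                         # internal coding of the lexical item
    grammar: str                        # grammatical information
    links: dict[str, int] = field(default_factory=dict)   # language -> entry index

class SubDictionary:
    def __init__(self, language: str):
        self.language = language
        self.entries: list[Entry] = []

    def add(self, coding: str, grammar: str) -> int:
        self.entries.append(Entry(coding, grammar))
        return len(self.entries) - 1

# The "dictionary administrator" simply keeps the sub-dictionaries together.
administrator = {"english": SubDictionary("english"), "chinese": SubDictionary("chinese")}

e = administrator["english"].add("study", "noun/verb")
c = administrator["chinese"].add("4282 4496", "noun/verb")   # internal character codes
administrator["english"].entries[e].links["chinese"] = c      # one cross pointer per
administrator["chinese"].entries[c].links["english"] = e      # other language

# Translation in either direction follows the pointer; no information is duplicated.
target = administrator["english"].entries[e].links["chinese"]
print(administrator["chinese"].entries[target].coding)   # 4282 4496
```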
3 .7.The COMMON DATA-POOL is a set of data which will be used by all items or a subset of items. For example, articles, measure words, prefix and postfix etc. Fig. 3 .8 illustrates the record format of COMMON DATA-POOL. Consider the Chinese lexical item " (4282 4496)".(1). This item can be noun or a verb.(2). If it is a noun then it has one meaning, and can be assigned the semantic category NA (non-animate).For this particular meaning, the item has an associated information which specifies that it requires the particular measure word " (7309)".(3). If it is a verb then it has one meaning, and can be assigned the semantic category HA (humanized action).For this particular meaning, the item does not have an associated information.According to the above specification, the lexical information record, grammatical and target information record and the associated information record of the item will be as shown in Fig. 4.1(a) .The COMMON DATA-POOL of the Chinese SUB-DICTIONARY may be a set of Chinese characters which might be necessary to be inserted into the Chinese sentence.For example, the Chinese character " (0001)", " (0020)", measure word " (7309)" etc.Example 2.Consider the English lexical item "STUDY".(1). This item can be a noun or a verb.(2). If it is a noun then it has one meaning, and can be assigned the semantic category NA (non-animate).For this particular meaning, the item has an associated information which specifies that the plural form of the item is "STUD + IES".(3). If it is a verb then it has one meaning, and can be assigned the semantic category HA (humanized action).For this particular meaning, the item has a, set of associated information which specify how the various form of the item can be constructed.According to the above specification, the lexical information record, grammatical and target information record and the associated information record of the item will be as shown in Fig 4.1(b) .The COMMON DATA-POOL of the English SUB-DICTIONARY may be a set of characters which might be necessary to be inserted into the English words or sentence.For example, " S, ISS, D, ED, ING " etc. that is, an item in one SUB-DICTIONARY may have one and more than one equivalences in another SUB-DICTIONARY.
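The entry layout described above (codings, grammatical information, and (n-1) pointers to the other sub-dictionaries) can be sketched as a small data structure. The following Python sketch is illustrative only: the class and field names are invented here, and the two-language setup merely mirrors the Chinese/English DLT case rather than reproducing the original implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Entry:
    """One SUB-DICTIONARY entry: internal coding, grammatical info,
    and pointers into the other sub-dictionaries (one set per other language)."""
    coding: str                    # internal coding of the lexical item, e.g. "4282 4496"
    grammatical_info: dict         # e.g. {"noun": {"category": "NA"}, "verb": {"category": "HA"}}
    targets: dict = field(default_factory=dict)   # language name -> entry keys in that sub-dictionary

class SubDictionary:
    """Holds the entries for one language."""
    def __init__(self, language):
        self.language = language
        self.entries = {}          # coding -> Entry

    def add(self, entry):
        self.entries[entry.coding] = entry

class DictionaryAdministrator:
    """Routes a lookup in the source language to equivalents in the target language."""
    def __init__(self, sub_dictionaries):
        self.subs = {sd.language: sd for sd in sub_dictionaries}

    def equivalents(self, source_lang, coding, target_lang):
        entry = self.subs[source_lang].entries[coding]
        keys = entry.targets.get(target_lang, [])
        return [self.subs[target_lang].entries[k] for k in keys]

# Tiny usage example: the English item "STUDY" pointing at a Chinese entry and back.
zh = SubDictionary("Chinese")
en = SubDictionary("English")
zh.add(Entry("4282 4496", {"noun": {"category": "NA"}, "verb": {"category": "HA"}},
             targets={"English": ["STUDY"]}))
en.add(Entry("STUDY", {"noun": {"category": "NA", "plural": "STUD+IES"},
                       "verb": {"category": "HA"}},
             targets={"Chinese": ["4282 4496"]}))
admin = DictionaryAdministrator([zh, en])
print([e.coding for e in admin.equivalents("English", "STUDY", "Chinese")])
```

One attraction of such a layout, in line with the paper's argument for the bi-directional case, is that the same monolingual information is stored only once and cross-language links are just pointers, so adding a further language means adding one sub-dictionary rather than a whole new bilingual store.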
null
null
null
null
null
null
{ "paperhash": [ "wang|research_on_chinese-english_machine_translation.", "heineck|automatic_language_translation.", "knowles|the_pivotal_role_of_the_various_dictionaries_in_an_mt_system", "lamb|a_high-speed_large-capacity_dictionary_system" ], "title": [ "Research on Chinese-English Machine Translation.", "AUTOMATIC LANGUAGE TRANSLATION.", "The pivotal role of the various dictionaries in an MT system", "A high-speed large-capacity dictionary system" ], "abstract": [ "Abstract : The report documents results of a 13-month effort in Chinese-English machine translation R and D. Main emphasis was placed on design of automatic lookup system for segmentation of Chinese test into units of meaning, and design of automatic syntactic analysis system for recognition of Chinese sentence structure. The following tasks were progressing concurrently: further compilation of lexical data with refined grammar codes, and continuing sophistication of rules for automatic syntactic analysis. Completion of Syntactic Analysis System (SAS) and associated subroutines constitutes a major achievement. Continuation phase will be devoted mainly to interlingual transfer problem and synthesis in English, culminating in design of a prototype system for Chinese-English machine translation. (Author)", "Abstract : This report documents the work performed in automatic Russian-English translation. The objective of this contract was to extend and improve capabilities of the USAF-IBM translation system based on limited-environment program. Linguistic studies, implementation research and lexicographic work were performed to achieve this objective. Linguistic studies consisted in development of more extensive grammatical and syntactic information for inclusion in the Photostore Lexicon. As a part of this effort, adjectival entries in this dictionary were provided with government tags, and a detailed study of grammatical and semantic properties of Russian nouns was carried out. (Author)", "In a coil support frame of a winding machine having a pair of centering discs mounted in a position opposing one another and mutually spaced-apart so as to clamp a coil core therebetween, there is included a braking device operatively connected to at least one of the centering discs and actuable to brake rotation of the one centering disc and to apply pressure to the one centering disc in direction toward the other centering disc so as to reinforce the clamping force exerted by the pair of opposing centering discs on a coil core clamped therebetween.", "This paper describes a method of adapting dictionaries for use by a computer in such a way that comprehensiveness of vocabulary coverage can be maximized while look-up time is minimized. Although the programming of the system has not yet been completed, it is estimated at the time of writing that it will allow for a dictionary of 20,000 entries or more, with a total look-up time of about 8 milliseconds (.008 seconds) per word, when used on an IBM 704 computer with 32,000 words of core storage. With a proper system of segmentation, a dictionary of 20,000 entries can handle several hundred thousand different words, thus providing ample coverage for a single fairly broad field of science. Although the system has been designed specifically for purposes of machine translation of Russian, it is applicable to other areas of linguistic data processing in which dictionaries are needed." ], "authors": [ { "name": [ "William S-Y. Wang", "Ching-Yi Dougherty", "Herbert Doughty", "C. Johnson", "Sally H. 
Lee" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Arthur W. Heineck", "George W. Tarnawsky" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "F. Knowles" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "S. Lamb", "W. Jacobsen" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null, null ], "s2_corpus_id": [ "62722314", "60696142", "51772278", "26800633" ], "intents": [ [], [ "background" ], [ "background" ], [ "background" ] ], "isInfluential": [ false, false, false, false ] }
null
492
0
null
null
null
null
null
null
null
null
7fa3a081cb7ccd3130c5b5cdaedbdfde55f46882
237558705
null
A software system for describing a grammar of machine translation: {GRADE}
A new software system for describing a grammar of a machine translation system has been developed. This software system is called GRADE (GRAmmar DEscriber). GRADE has the following features:
{ "name": [ "Nakamura, Jun-ichi and", "Nagao, Makoto" ], "affiliation": [ null, null ] }
null
null
Proceedings of the International Conference on Methodology and Techniques of Machine Translation: Processing from words to language
1984-02-01
0
1
null
null
GRADE allows a grammar writer to divide a whole grammar into several parts. Each part of the grammar is called a subgrammar. A subgrammar describes a step of the translation process. A whole grammar is then described by a network of subgrammars. This network is called a subgrammar network. A subgrammar network allows a grammar writer to control the translation process precisely. When a subgrammar network in the analysis phase consists of a subgrammar for a noun-phrase (SG1) and a subgrammar for a verb-phrase (SG2) in this sequence, the subgrammar network first applies SG1 to an input sentence, then applies SG2 to the result of the application of SG1, thus obtaining a syntactic structure for the input sentence. A subgrammar consists of a set of rewriting rules. Rewriting rules in a subgrammar are applied to an input sentence in an appropriate order, which is specified in the description of the subgrammar. A rewriting rule transforms a tree structure into another tree structure. Rewriting rules use a powerful pattern matching algorithm to test their applicability to a tree structure. For example, a grammar writer can write a pattern that recognizes and parses an arbitrary number of sub-trees. Each node of a tree structure has a list of pairs of a property name and a property value. A node can express a category name, a semantic marker, flags to control the translation process, and various other information. This tree-to-tree transformation operation allows a grammar writer to describe all the processes of analysis, transfer and generation of a machine translation system with the uniform description capability of GRADE. A subgrammar network or a subgrammar can be written in an entry of the dictionaries for a machine translation system. A subgrammar network or a subgrammar written in a dictionary entry is called a dictionary rule, which is specific to a word. When an input sentence contains a word which has a dictionary rule, the rule is applied to the input sentence at an appropriate point of the translation process. It can express more precise processing appropriate for that specific word than a general subgrammar network or subgrammar. It also allows grammar writers to adjust a machine translation system to a specific domain easily.
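The subgrammar-network idea (an ordered network of subgrammars, each an ordered set of tree-to-tree rewriting rules) can be sketched in a few lines. The sketch below is illustrative only: the rule representation, the function names and the DET+N example are invented here and are far simpler than GRADE's actual pattern-matching formalism.

```python
# A grammar is a network (here, a simple sequence) of subgrammars applied in order,
# and each subgrammar is an ordered list of tree-to-tree rewriting rules.

class Node:
    def __init__(self, cat, children=None, **props):
        self.cat = cat
        self.children = children or []
        self.props = props          # property name -> value (semantic markers, flags, ...)

def apply_rule(rule, node):
    """A rule is (pattern, rewrite): pattern tests a node, rewrite builds a new one."""
    pattern, rewrite = rule
    return rewrite(node) if pattern(node) else node

def apply_subgrammar(rules, node):
    for rule in rules:              # rules are tried in the order the grammar writer gave
        node = apply_rule(rule, node)
    return node

def apply_network(subgrammars, node):
    for sg in subgrammars:          # e.g. a noun-phrase subgrammar, then a verb-phrase one
        node = apply_subgrammar(sg, node)
    return node

# Example rule: group DET + N into an NP.
def is_det_n(node):
    cats = [c.cat for c in node.children]
    return "DET" in cats and "N" in cats

def make_np(node):
    det = next(c for c in node.children if c.cat == "DET")
    n = next(c for c in node.children if c.cat == "N")
    rest = [c for c in node.children if c not in (det, n)]
    return Node(node.cat, [Node("NP", [det, n])] + rest, **node.props)

sentence = Node("S", [Node("DET"), Node("N"), Node("V")])
result = apply_network([[(is_det_n, make_np)]], sentence)
print([c.cat for c in result.children])   # ['NP', 'V']
```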
null
GRADE is written in LISP and is implemented on FACOM M-382 and Symbolics 3600. GRADE is used in the machine translation system between Japanese and English. The project was started by the Japanese government in 1982. The effectiveness of GRADE has been demonstrated in the project.
null
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
492
0.002033
null
null
null
null
null
null
null
null
e11ce10eff56df1729626bda3ef6109566f779c9
64829188
null
A General Computational Model for Word-Form Recognition and Production
The formalism of generative phonology has been widely used since its introduction in the 1960's. The generative formalism is general enough to be applied to the morphology of any language, and its rules are stated in linguistically relevant terms. The morphology of a language is described by a set of rules which start from an underlying lexical representation and transform it step by step until the surface representation is reached. So-called abstract phonology insists on invariant lexical representations for morphemes, and thus all variations among distinct surface forms must be accounted for by rules. This has led to a need for regulating the order in which the rules may be applied. The generative formalism is conceptually unidirectional, because only the production of word-forms is guaranteed to be straightforward. As the rules are applied, they sometimes deform the context of other rules. Backwards application of rules would require either foresight or extensive trials of tentative rule applications.
{ "name": [ "Koskenniemi, Kimmo" ], "affiliation": [ null ] }
null
null
Proceedings of the 4th Nordic Conference of Computational Linguistics ({NODALIDA} 1983)
1984-05-01
14
626
null
null
null
null
null
rules may be applied ... rule applications. A General Computational Model for Word-Form Recognition and Production, Kimmo Koskenniemi, Proceedings of NODALIDA 1983, pages 145-154. ... first rule-automaton is to permit the pair e if and only if the plural I follows. The following automaton with three states (1, 2, 3) performs this: state 1 is the initial state of the automaton. If the automaton receives pairs other than ^ or \ it will remain in state 1 ... (the only transition arc labelled with I starts from 2). Receiving this pair leads to state 4, which is non-final because the right context must also be satisfied. The only escape from state 4 is via a surface vowel; anything else terminates ... automaton (1, 2, 4 ...
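The rule-automaton idea sketched above (a finite-state machine that reads lexical/surface character pairs and accepts a word-form only if it ends in a final state) can be illustrated with a toy example. The states, the transition table and the "e realized as zero only before plural I" rule below are invented for illustration; they are not the automaton given in the paper.

```python
# A two-level rule automaton over lexical/surface character pairs.
FINAL = {1, 3}

TRANSITIONS = {
    # (state, (lexical, surface)) -> next state
    (1, ("e", "0")): 2,     # tentative deletion of e; must be confirmed by a following I
    (2, ("I", "i")): 3,     # plural I follows: the deletion was licit
    (1, ("e", "e")): 1,     # e kept on the surface is always fine
}

def default(state, pair):
    """Pairs not mentioned above keep us in state 1, except after a pending deletion."""
    return None if state == 2 else 1

def accepts(pairs):
    state = 1
    for pair in pairs:
        state = TRANSITIONS.get((state, pair), default(state, pair))
        if state is None:           # the rule was violated
            return False
    return state in FINAL

# A lexical string ending in e + plural I, with e realized as zero, is accepted;
# deleting e with no plural I following is rejected.
print(accepts([("l", "l"), ("a", "a"), ("s", "s"), ("e", "0"), ("I", "i")]))  # True
print(accepts([("l", "l"), ("a", "a"), ("s", "s"), ("e", "0"), ("a", "a")]))  # False
```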
null
null
null
null
{ "paperhash": [ "koskenniemi|two-level_model_for_morphological_analysis", "and|a_process_model_of_morphology_and_lexicon", "koskenniemi|a_general_computational_model_for_word-form_recognition_and_production" ], "title": [ "Two-Level Model for Morphological Analysis", "A PROCESS MODEL OF MORPHOLOGY AND LEXICON", "A General Computational Model for Word-Form Recognition and Production" ], "abstract": [ "This paper presents a new linguistic, computationally implemented model for morphological analysis and synthesis. It is general in the sense that the same language independent algorithm and the same computer program can operate on a wide range of languages, including highly inflected ones such as Finnish, Russian or Sanskrit. The new model is unrestricted in scope and it is capable of handling the whole language system as well as ordinary running text. A full description for Finnish has been completed and tested, and the entries in the Dictionary of Modern Standard Finnish have been converted into a format compatible with it. \n \nThe model is based on a lexicon that defines the word roots, inflectional morphemes and certain nonphonological alternation patterns, and on a set of parallel rules that define phonologically oriented phenomena. The rules are implemented as parallel finite state automata, and the same description can be run both in the producing and in the analyzing direction.", "The past 15 years have witnessed a steadily growing interest in morphology and lexicon. During the generative hey-days morphology was mostly reduced to phonology while the lexicon was seen just äs a minimal list of idiosyncratic properties of lexical entries. More recent views stress the üreducibility of morphology and attribute more structure and a more active role to the lexicon. Several comprehensive theories or more sepcific models have been proposed for describing the structure and interplay of morphology and lexicon. Many of these approaches are purely autonomous accounts of (parts of) the language System in the Saussurean or Chomskyan sense. Such are the natural,semiotically based morphology of MAYERTHALER (1981), DRESSLEB (1981), and WURZEL (1984), äs well äs the lexical morphology of KEPARSKY (1982) and others. By definition, these approaches pay little or no attention to properties of language use such äs the processing of word-form tokens or the import of frequency of occurrence. Other approaches are outspokenly behavioral. These stress the primacy of behavioral data over autonomous theorizing, at least äs a starting-point of psycholinguistics and psychology of language. Here belong e.g. several models of word-recognition such äs MORTON'S (1969) logogen model, FORSTER'S (1976) active search model, and the cohort model of MARSLEN—WILSON and TYLER (e.g. 1980, 1981). These models are based on genuine experimental work and have little in common with autonomous morphology. Third, one may try to integrate autonomous analysis and performance data. A pertüient example is BYBEE and SLOBIN'S (1982) morphological Schemata which are based partly on autonomous analysis, partly on psycholinguistic evidence. The Schemata are claimed to be units used in accessing the lexicon.", "The formalism of generative phonology has been widely used since its introduction in the 1960's. The generative formalism is general en ou gh to be ap pl ie d to the m o r p h o l o g y of any l a n ­ guage, and its rules are stated in l i n g u i s t i c a l l y rele va nt terms. 
The m o r p h o l o g y of a lang ua ge is d e s c r i b e d by a set of rules which start from a underlying lexical representation, and transform it step by step until the surface representation is reached. So -c al le d abst ra ct p h o n o l o g y insists on in va ri an t lexical representations for morphemes, and thus all variations a m o n g di s t i n c t surface f o rm s m u s t be a c c o u n t e d for by rules. This has Led to a need for re g u l a t i n g the order in w h i c h the rules may be applied. The generative formalism is conceptually unidirectional, because only the production of word-forms is guaranteed to be straight forward. As the rules are applied, they sometimes de­ form the context of other rules. Backwards application of rules would require either foresight or extensive trials of tentative rule applications. The generative formalism has proven to be computationally difficult, and therefore it has found little use in morphologi­ cal programs. Until recently only simulators of rules have been w r i t t e n and used for testing p h o n o l o g i c a l d e s c r i p t i o n s or in the teaching of phonology." ], "authors": [ { "name": [ "K. Koskenniemi" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Fred KARLSSON and", "K. Koskenniemi" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "K. Koskenniemi" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null ], "s2_corpus_id": [ "2816585", "143579218", "12819449" ], "intents": [ [ "methodology" ], [], [ "methodology" ] ], "isInfluential": [ false, false, false ] }
null
489
1.280164
null
null
null
null
null
null
null
null
38d4ef9177e1474d2ba40ef017f4fb7e5fc62092
20714455
null
Parsing basert p{\aa} {LFG}: Et {MIT}/Xerox-system applisert p{\aa} norsk (Parsing based on {LFG}: A {MIT}/Xerox system applied on {N}orwegian) [In {N}orwegian]
Parsing based on LFG: An MIT/Xerox system applied to Norwegian. What we have to present here today are not research results of our own, but rather a report on a parsing project for Norwegian that we are in the process of starting up in Bergen, and a presentation of the main features of the language analysis system the project uses. "We" in this context are NAVFs EDB-senter for humanistisk forskning, represented by Knut Hofland, and Institutt for fonetikk og lingvistikk, represented by Helge Dyvik. We also expect to be able to attach more people to the project in the time to come. The basis for the parsing project is the development work carried out at NAVFs EDB-senter by Per-Kristian Halvorsen, who held a research fellowship there until August 1983. Halvorsen implemented a universal language analysis system developed in collaboration between researchers at MIT and Xerox PARC in California, among them Halvorsen himself, and began the work of building up a Norwegian parser within the framework of this system. Halvorsen has just taken up a new position at Xerox in California and therefore could not be present here to present his work himself. The Norwegian fragment built up so far is very limited. It covers simple active declarative sentences, which may contain infinitive complements controlled by the subject or the object (e.g. «Per lovet Kari å komme»), and a rudimentary lexicon. In the further work we will first concentrate on the verbal system, more precisely the system of periphrastic constructions and the modal and aspectual categories they express, on extending the lexicon, and then on long-distance dependencies of the kind found in wh-questions, relative clauses and topicalized constructions. The aim is to increase the parser's coverage to include central construction types and a larger vocabulary, and in time to involve some of the faculty's language departments in this work. The analysis system should be well suited to this, as we shall see. We also hope to be able to tie the work in with some of the more application-oriented research going on at other departments of the University of Bergen. The language analysis system used can be characterized as "linguist-friendly". It has been developed under the direction of Joan Bresnan at MIT and Ronald M. Kaplan at Xerox PARC. The internal representation of the grammar is a network structure, and the parsing algorithm builds on Kaplan's General Syntactic Processor. But the system also contains a grammar interpreter which relieves the user of having to formulate the grammatical description as transition networks. Grammatical descriptions can be entered directly in the form of rules within the grammar model Lexical-Functional Grammar (LFG), and the grammar interpreter then translates the description into the more machine-oriented network structure that the parsing algorithm refers to. LFG is a linguistically motivated model with formal properties that are largely familiar from the modern linguistic tradition. This makes it easy for linguists without any special interest in parsing theory to be attached to computational linguistics projects. LFG is a transformation-free grammar model. The model thus does not distinguish between deep structure and surface structure in the constituent analysis. Phenomena such as EQUI, or PRO control in more recent versions of Chomskyan syntax (for example, the identification of the subject in «Per lovet Kari å synge» as the understood subject of the infinitive) are described in the lexicon as information about the control properties of the governing verb. (The PRO analysis in EST does not, admittedly, operate with any transformation either, but it assumes an empty PRO as the syntactic subject of the infinitive, which the LFG analysis avoids.) In a corresponding way, the passive is handled in the lexicon as a redundancy rule, which in practice means that the active and passive forms of the same verb become two different lexical entries with identical semantic form but with different choices of nominal constituents as arguments. Long-distance dependencies such as those we find in wh-questions,
{ "name": [ "Dyvik, Helge and", "Hofland, Knut" ], "affiliation": [ null, null ] }
null
null
Proceedings of the 4th Nordic Conference of Computational Linguistics ({NODALIDA} 1983)
1984-05-01
1
0
null
null
relative clauses, etc. (e.g. «Hvem påstod Per at Kari ikke likte at han kjente?») can hardly be handled lexically; they are taken care of by means of a special type of corresponding variables on the controlling constituent («hvem») and the controlled empty position (after «kjente»). The context-free phrase structure rules of the grammar can thereby generate constituent structures that correspond directly to the observed string of forms. The analysis system allows the grammar to be written directly in the form of such rules. We then get constituent structures of the usual type: ( (3) These structures are all generated by context-free phrase structure rules. This means, for example, that (1) and (2) are not related through the syntactic derivation, and that the syntactic rules do not relate «Per» in (3) to any empty subject position in front of «å synge» either. To express these relations, and thereby obtain a usable starting point for a semantic interpretation, these simple structural representations must be supplemented with further means of expression. This is also necessary in order to eliminate ungrammatical sentences such as *«Per aksepterer Kari å synge», a sentence which the phrase structure rules alone will generate if they first generate (3). These further means of expression are grammatical functions and grammatical features. LFG treats grammatical functions such as SUBJECT, OBJECT, etc. as primitives, and not as entities that must necessarily be configurationally definable, as is the case within EST. The functional terms SUBJECT, OBJECT, etc. in LFG have no independent interpretation, but serve only as a basis for the translation from syntactic to semantic representation; that is, they help establish the connection between syntactic constituents and semantic argument positions. They also have an important function in filtering out ungrammatical phrase structures, as we shall see. Partly through the lexicon and the morphological analysis and partly through the phrase structure rules, the syntactic trees are then supplemented with functions and features which, for example, state that «lingvisten» is SUBJECT in (1) while «datamaskiner» is OBJECT, and further that «lingvisten» is MASCULINE, SINGULAR, DEFINITE. This information could be added to the trees. In example 2 we have also obtained the semantic structure. By pointing at different levels in the phrase structure tree and the functional structure, we can display parts of the functional structure and the semantic structure. If we point at a word, we get that word's entry in the lexicon. We can then correct it if necessary and run the analysis again. Similarly, we can display and correct the grammatical rules by pointing at the rule name in the middle of the left half of the screen. This is shown in example 3 (rule window and lexicon window). In example 4 we see what happens with an ungrammatical sentence. Here we get a phrase structure tree, but no consistent f-structure. By pointing at the label INCONSISTENT the system shows the inconsistent f-structure and indicates which functional equation is not satisfied (f36 INF). Excerpts of the functional equations are shown in the middle window at the bottom of the screen. We have also called up the rule VP' and can see where the equation in question appears in the grammar. Proceedings of NODALIDA 1983
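The notion of an f-structure becoming INCONSISTENT when two functional equations clash can be illustrated with a small sketch. The attribute names and the unification routine below are invented for illustration; the MIT/Xerox system described above is of course far richer.

```python
# Functional equations contribute attribute/value pairs; a sentence is rejected
# when two equations assign conflicting values to the same attribute.

class Inconsistent(Exception):
    pass

def unify(fstruct, attr, value):
    """Add attr=value to an f-structure, failing on a conflicting earlier value."""
    if attr in fstruct and fstruct[attr] != value:
        raise Inconsistent(f"{attr}: {fstruct[attr]} vs {value}")
    fstruct[attr] = value
    return fstruct

def build_fstructure(equations):
    fs = {}
    for attr, value in equations:
        unify(fs, attr, value)
    return fs

# Consistent: all equations agree on the subject's number.
print(build_fstructure([("SUBJ NUM", "SG"), ("SUBJ DEF", "+"), ("TENSE", "PAST")]))

# Inconsistent: two equations clash, so the sentence gets a phrase structure tree
# but no consistent f-structure.
try:
    build_fstructure([("SUBJ NUM", "SG"), ("SUBJ NUM", "PL")])
except Inconsistent as e:
    print("INCONSISTENT:", e)
```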
null
null
null
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
489
0
null
null
null
null
null
null
null
null
cbfda25b1ffb2505e6c468b0efcbb4062a9df5a7
27029347
null
A Computational Model of {F}innish Sentence Structure
A COMPUTATIONAL MODEL OF FINNISH SENTENCE STRUCTURE. 1 Introduction. The present paper propounds an outline of a computational model of Finnish sentence structures. Although we focus on Finnish, we feel that the ideas behind the model might be applicable to other languages as well, in particular to other inflectional free-word-order languages. A parser based on this model is being implemented as a component of a larger system, namely a natural language database interface. There it will follow a component of morphological analysis (see Jäppinen et al. [83]); hence, throughout the present paper it is assumed that all relevant morphological and lexical information is computationally available for all words in a sentence. Even though we have a database application in mind, sentence analysis will be based on general linguistic knowledge. All application-dependent inferences are left to subsequent modules which are not discussed here.
{ "name": [ "Nelimarkka, Esa and", "J{\\\"a}ppinen, Harri and", "Lehtola, Aarno" ], "affiliation": [ null, null, null ] }
null
null
Proceedings of the 4th Nordic Conference of Computational Linguistics ({NODALIDA} 1983)
1984-05-01
8
2
null
The sentence "Nuorena poika heitti kiekkoa" ("As young, the boy (used to) throw the discus"), for example, will be given the structure( 1) The above feature explains why no registers are needed in our approach.We have outlined a model of Finnish which is based on 2-way structure building transition networks. We have, as the above illustration exhi bits, specified our model with a kind of production-rule formalism.A compiler which compiles such descriptions into LISP is under construction. This LISP-code is further compiled into a directly execu table code so that no interpretation of the productions or production packets of the grammar is necessary. That is, most of the linguistic knowledge is put into active form. We hope to get Implementational results in early spring 1984.176 Proceedings of NODALIDA 1983Proceedings of NODALIDA 1983Proceedings of NODALIDA 1983
null
null
null
null
null
null
null
null
{ "paperhash": [ "anderson|the_grammar_of_case:_towards_a_localistic_theory", "hudson|arguments_for_a_non-transformational_grammar" ], "title": [ "The Grammar of Case: Towards a Localistic Theory", "Arguments for a Non-Transformational Grammar" ], "abstract": [ "Part I. Preliminaries: 1. Introduction 2. A sketch of grammar Part II. Nominative and Ergative: 3. Nominative 4. Ergative 5. Nominative, ergative and causatives Part III. Locative and Ablative: 6. Locative 7. Abstract location 8. Ablative 9. Abstract direction Part IV. Interlude: 10. Sequencing Part V. 'Local' and 'non-local': 11. Ablative and ergative, locative and nominative 12. Prospect and retrospect: Bibliography and abbreviations Index.", "For the past decade, the dominant transformational theory of syntax has produced the most interesting insights into syntactic properties. Over the same period another theory, systemic grammar, has been developed very quietly as an alternative to the transformational model. In this work Richard A. Hudson outlines \"daughter-dependency theory,\" which is derived from systemic grammar, and offers empirical reasons for preferring it to any version of transformational grammar. The goal of daughter-dependency theory is the same as that of Chomskyan transformational grammar to generate syntactic structures for all (and only) syntactically well-formed sentences that would relate to both the phonological and the semantic structures of the sentences. However, unlike transformational grammars, those based on daughter-dependency theory generate a single syntactic structure for each sentence. This structure incorporates all the kinds of information that are spread, in a transformational grammar, over to a series of structures (deep, surface, and intermediate). Instead of the combination of phrase-structure rules and transformations found in transformational grammars, daughter-dependency grammars contain rules with the following functions: classification, dependency-marking, or ordering. Hudson's strong arguments for a non-transformational grammar stress the capacity of daughter-dependency theory to reflect the facts of language structure and to capture generalizations that transformational models miss. An important attraction of Hudson's theory is that the syntax is more concrete, with no abstract underlying elements. In the appendixes, the author outlines a partial grammar for English and a small lexicon and distinguishes his theory from standard dependency theory. Hudson's provocative thesis is supported by his thorough knowledge of transformational grammar.\"" ], "authors": [ { "name": [ "John M. Anderson" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "R. Hudson" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null ], "s2_corpus_id": [ "58200925", "62158973" ], "intents": [ [], [] ], "isInfluential": [ false, false ] }
null
489
0.00409
null
null
null
null
null
null
null
null
d1460bfccc0ee9f3463b90399c3b6a965dd8180d
219300616
null
Regelaktivering i en parser f{\"o}r svenska ({SVE}.{UCP}) (Rule activation in a parser for {S}wedish ({SVE}.{UCP})) [In {S}wedish]
Introduction. Parsing is the process in which a linguistic expression (word, phrase, clause, sentence) is assigned a linguistic description according to a given
{ "name": [ "S{\\aa}gvall Hein, Anna" ], "affiliation": [ null ] }
null
null
Proceedings of the 4th Nordic Conference of Computational Linguistics ({NODALIDA} 1983)
1984-05-01
0
0
null
null
null
null
null
null
Main paper: Appendix:
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
489
0
null
null
null
null
null
null
null
null
3050476d7bdc5ee69b3b5d626fe390d58765c880
40032962
null
Inte bara idiom (Not only idioms) [In {S}wedish]
... and collocations have for me been such a fundamental question. Among many other things, they have an interesting relationship to parsing. I remember with great intensity my impressions when I saw the first results of concordance runs quite a few years ago. They opened up new linguistic vistas. One saw in an instant what importance collocations must have in linguistic activity. This has been an important starting point for continually updated thinking about collocations. One of the first people I came across who had thought along these lines at all, with the computer in view, was John Sinclair (1970). He argued that one should focus on what is statistically significant in order to extract the collocations. The ...
{ "name": [ "All{\\'e}n, Sture" ], "affiliation": [ null ] }
null
null
Proceedings of the 4th Nordic Conference of Computational Linguistics ({NODALIDA} 1983)
1984-05-01
2
0
null
to parsing. ... of course is not that simple, even though it is part of the truth. 14 Inte bara idiom, Sture Allén, Proceedings of NODALIDA 1983, pages 14-20. ... are recurrent in the sense stated in the introduction to the frequency dictionary. ... That was perhaps the first overwhelming figure for us.
null
null
null
null
Main paper: 15: Proceedings of NODALIDA 1983 ... Makkai answers this by bringing out all the ... partial results. We developed a paradigm-tagged base lexicon of about 8,000 units. It was published by Staffan Hellberg in a book from 1978. The base lexicon gives stem, headword form and paradigm number, and thus in principle covers the whole morphology. ... on new words. ... are some who in various ways have worked with collocations. One of them is Harald Burger, who has published an interesting book on idioms (1973), in which he above all ... the Brown corpus in order to extract the collocation material from it. The aim is primarily to compile a phrase dictionary ... at the Lexical Database. It is at present the largest project at Språkdata. In it we define around 75,000 lemmas of the modern Swedish language and give information of many different kinds, among other things concerning precisely phraseology and idiomatics. Here is an excerpt from the collocation information concerning the lemma ... The metalinguistic expressions are richly varied and based ... Appendix:
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
489
0
null
null
null
null
null
null
null
null
879f5a09da54bae3599c1f9ae7188a2532d9d512
42242994
null
Regelformalismer til brug ved datamatisk lingvistik (Rule formalisms for use in computational linguistics) [In {D}anish]
Bente Maegaard, Københavns Universitet, Institut for anvendt og matematisk lingvistik, Njalsgade 96, 2300 København S. Rule formalisms for use in computational linguistics. When, in the "old days", one built a system for linguistic analysis, one usually did it by writing a program in which all the knowledge to be used was expressed.
{ "name": [ "Maegaard, Bente" ], "affiliation": [ null ] }
null
null
Proceedings of the 4th Nordic Conference of Computational Linguistics ({NODALIDA} 1983)
1984-05-01
0
0
null
null
... The alternative is that one has to write just as many rules as there are "wrong" orderings of the words; this is, first, cumbersome, and second it means that one ... Maas and Bente Maegaard: Syntax and Semantics: A Formalism, EEC, 1984 (not freely available). 168 Proceedings of NODALIDA 1983
null
null
null
Main paper: Here both the left-hand side and the right-hand side consist of tree structures with node information. The format looks like this: geometry, conditions, assignments: <specification of a tree> ^ <specification of a tree>, <conditions on the decorations>, <assignment of values to the right-hand side> ... Appendix:
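A toy reading of the three-part rule format just described (geometry, conditions, assignments) might look as follows. The condition/assignment encoding and the article-noun example are invented for illustration and are not the formalism of the paper.

```python
# The geometry matches a tree shape, the conditions test the node decorations,
# and the assignments set values on the right-hand side's decorations.

def apply(rule, tree):
    geometry, conditions, assignments = rule
    match = geometry(tree)
    if match is None or not all(cond(match) for cond in conditions):
        return None
    return assignments(match)

# Example: combine an article and a noun into an NP when they agree in gender.
def art_plus_noun(tree):
    kids = tree["children"]
    if len(kids) == 2 and kids[0]["cat"] == "ART" and kids[1]["cat"] == "N":
        return {"art": kids[0], "noun": kids[1]}
    return None

rule = (
    art_plus_noun,
    [lambda m: m["art"]["gender"] == m["noun"]["gender"]],
    lambda m: {"cat": "NP", "gender": m["noun"]["gender"],
               "children": [m["art"], m["noun"]]},
)
tree = {"cat": "X", "children": [{"cat": "ART", "gender": "neuter"},
                                 {"cat": "N", "gender": "neuter"}]}
print(apply(rule, tree)["cat"])   # NP
```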
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
489
0
null
null
null
null
null
null
null
null
2eb4ad77ed8862cfa49002fb149299d4d696750e
37595071
null
Tagging and Parsing {F}innish
... E.g. Winograd (1983, 544-5) argues that morphological phenomena do not lend themselves well to the methodology of generative grammar because of their high degree of irregularity and idiosyncracy. Furthermore, he opines that any analysis seeking to find morphological regularities must examine words in terms of their history. He even goes so far as to argue that, in contrast to what holds for syntax, native speakers do not utilize grammatical knowledge in the production and understanding of morphological structures. He concedes, though, that there are a few highly productive morphological phenomena that cannot be handled by lexical look-up. But the bulk of morphology is anyway to be deposited in the lexicon just by listing individual forms. This view does, perhaps, descriptive justice to large portions of English morphology even though the demarcation line between syntax and morphology seems unnecessarily strict even here. But a general model of morphological competence and processing must surely provide stronger means for dealing with e.g. the plethora of word forms found in more synthetic languages.
{ "name": [ "Karlsson, Fred" ], "affiliation": [ null ] }
null
null
Proceedings of the 4th Nordic Conference of Computational Linguistics ({NODALIDA} 1983)
1984-05-01
2
0
null
1 2 3 4 5 6 7 8 9: V1 V2 V3 PASS N1 N2 A1 A2 N ... have devised a morphological tagging program, FINTAG, ... changes have to be made, i.e. the part of speech label is proper and disambiguated in contex... Bever, T.G. 1970. The cognitive basis for linguistic structures. In J.R. Hayes (ed.), Cognition and the development of language, John Wiley & Sons, N.Y., 279-352. Brodda, B. 1982. Problems with tagging - and a solution. Nordic Journal of Linguistics 5:2, 93-116.
ESITELMINA. Final output.
null
null
following contents (Karlsson 1983). ... synthetic morphological systems, be they agglutinative as Turkish or semi-agglutinative as Finnish, by way of mere lexical listing. The vast majority of the 400,000 forms under scrutiny certainly are both morphologically an...
Main paper: a model of derivational morphology: 1 2 3 4 5 6 7 8 9: V1 V2 V3 PASS N1 N2 A1 A2 N ... have devised a morphological tagging program, FINTAG, ... changes have to be made, i.e. the part of speech label is proper and disambiguated in contex... Bever, T.G. 1970. The cognitive basis for linguistic structures. In J.R. Hayes (ed.), Cognition and the development of language, John Wiley & Sons, N.Y., 279-352. Brodda, B. 1982. Problems with tagging - and a solution. Nordic Journal of Linguistics 5:2, 93-116. +pr:tama=n kokoelma=n kirjoitukse=t +vf:0=vat pari=a kolmea luku%un +vi3:otta=ma=tta +vpa2:synty=nee=t viide=n +a:viime +n:vuode=n +n:aika=na. eraa=t +pr:ni=i=sta +vf:on +vpp2:julkiste=ttu lehdisto=ssa. eraa=t radio=ssa. eraa=t: ESITELMINA. Final output. Koskenniemi's model also analyzes compounds provided the (base forms of their) constituent parts are in the lexicon. Compounding is such a central morphological means in synthetic languages that it must be easily tractable also in computational models aspiring general applicability. The third morphological domain to be covered is derivation. The total number of derivational morphemes in Finnish is 150-200 depending upon how the most opaque and unfrequent ones are interpreted. Some 50-60 are highly productive and these are to be discussed here. The maximal productive Finnish derivational system comprises nine morphotactic positions with the following contents (Karlsson 1983). ... synthetic morphological systems, be they agglutinative as Turkish or semi-agglutinative as Finnish, by way of mere lexical listing. The vast majority of the 400,000 forms under scrutiny certainly are both morphologically an... Appendix:
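The analyses shown above mark morpheme boundaries with '=' and attach category tags to each word. As a purely illustrative sketch of that kind of lexicon-plus-suffix segmentation (not a reimplementation of FINTAG or of Koskenniemi's two-level model; the mini-lexicon, suffix list, and tag names are invented for the example):

# Toy suffix-based segmentation in the spirit of the analyses above, with
# stems and endings separated by "=".  The lexicon and suffix inventory are
# invented for illustration.
STEMS = {"talo": "N", "kirjoitukse": "N", "vuode": "N"}            # hypothetical stems
SUFFIXES = {"t": "+pl", "n": "+gen", "ssa": "+ine", "na": "+ess"}  # hypothetical endings

def analyze(word):
    """Return possible stem=suffix segmentations of `word` with their tags."""
    analyses = []
    for i in range(1, len(word) + 1):
        stem, rest = word[:i], word[i:]
        if stem in STEMS and (rest == "" or rest in SUFFIXES):
            tag = STEMS[stem] + (SUFFIXES[rest] if rest else "")
            analyses.append((f"{stem}={rest}" if rest else stem, tag))
    return analyses

print(analyze("kirjoitukset"))   # [('kirjoitukse=t', 'N+pl')]
print(analyze("vuoden"))         # [('vuode=n', 'N+gen')]

Even this toy version shows why mere lexical listing does not scale for synthetic languages: without a segmentation step, every stem would have to be listed with each of its inflected forms.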
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
489
0
null
null
null
null
null
null
null
null
cbd0f9d76544122135ad16c149f8d1361a5df503
32424839
null
Konkordans over de danske runeindskrifter (Concordance of the {D}anish runic inscriptions) [In {D}anish]
The Danish runic inscriptions contain a very rich amount of information of both a linguistic and a historical nature. Their linguistic form, however, means that the inscriptions can be difficult to use as source material for historians or archaeologists. What may be a useful or even a sufficient aid for philologists may be insufficient for the historian with less linguistic training. One can thus approach the runic inscriptions either with runological studies in mind or with Viking Age studies in mind, and it turns out that one needs tools of somewhat different design depending on which of the two approaches one chooses. The bulk of the inscription material, which amounts to close to 700 inscriptions covering the period from about 200 to about 1500, is available in very well published form. The standard work is still Lis Jacobsen & Erik Moltke, "Danmarks Runeindskrifter" from 1941-42, which includes all inscriptions found up to 1941. This work was followed up in 1976 by Erik Moltke with the book "Runerne i Danmark og deres oprindelse", which includes all runic inscriptions found up to that point. Although rarely, new runic inscriptions are still being found, e.g. on objects brought to light in archaeological excavations. The extensive excavations of the bog-offering find in Illerup Ådal south of Århus have brought some such objects to light; according to the archaeologists, Illerup Ådal still contains enormous quantities of objects, so it would not be surprising if new runic inscriptions were to turn up from there in the future as well.
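A concordance in this sense is essentially a keyword-in-context index over the inscription corpus: every attestation of a word form can be looked up together with its surrounding context and its source. The sketch below only illustrates that general idea; the two sample 'inscriptions' are rough placeholder transliterations (not accurate readings), and the output format is invented, not the format of the published concordance.

# Toy keyword-in-context (KWIC) concordance builder over a tiny corpus of
# placeholder inscription transliterations.
from collections import defaultdict

inscriptions = {
    "DR 26 Laeborg":      "rhafnukatufi hiau runaR thasi aft thurui trutnik sina",
    "DR 134 Ravnkilde 1": "asur sati stin thansi aft asbuth trunik",
}

def build_concordance(texts, width=2):
    """Map every word form to its occurrences with `width` words of context on each side."""
    index = defaultdict(list)
    for source, text in texts.items():
        words = text.split()
        for i, w in enumerate(words):
            left = " ".join(words[max(0, i - width):i])
            right = " ".join(words[i + 1:i + 1 + width])
            index[w].append((source, f"{left} [{w}] {right}"))
    return index

concordance = build_concordance(inscriptions)
for source, context in concordance["aft"]:
    print(source, ":", context)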
{ "name": [ "Holmboe, Henrik" ], "affiliation": [ null ] }
null
null
Proceedings of the 4th Nordic Conference of Computational Linguistics ({NODALIDA} 1983)
1984-05-01
0
0
null
drotning, f. 1) (to drottin 'lord') 'queen, lady of the house, mistress': ... 26 Læborg. 2) woman of high rank, married to or descended from a drottin; as a title the word is best rendered by 'lady': ... 134 Ravnkilde 1 (Swedish-influenced). Acc. sg. trutnik 26, trunik 134. Note 1: see col. 52 and Wimmer, DRM II p. 51f.; Joh. Steenstrup, Festskr. Erslev (1927). drottin, m. 1) 'lord, master': ... 131, 209, 295. 2) of the ruler of the giants, prince: ... 419. 3) [cf. dominus] (sample glossary entries from Danmarks Runeindskrifter; the runic transliterations are not recoverable from the scan).
null
null
null
null
Main paper: : drotning, f. 1) (to drottin 'lord') 'queen, lady of the house, mistress': ... 26 Læborg. 2) woman of high rank, married to or descended from a drottin; as a title the word is best rendered by 'lady': ... 134 Ravnkilde 1 (Swedish-influenced). Acc. sg. trutnik 26, trunik 134. Note 1: see col. 52 and Wimmer, DRM II p. 51f.; Joh. Steenstrup, Festskr. Erslev (1927). drottin, m. 1) 'lord, master': ... 131, 209, 295. 2) of the ruler of the giants, prince: ... 419. 3) [cf. dominus] (sample glossary entries from Danmarks Runeindskrifter; the runic transliterations are not recoverable from the scan). Appendix:
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
489
0
null
null
null
null
null
null
null
null
a9f267bd53dfa911730fdead9b1fa3fd7de00210
10738932
null
{GESA}, et {GE}nerelt System til Analyse af naturlige sprog, udformet som et overs{\ae}tter-for-tolker system med virtuel mellem-kode ({GESA}, a {GE}neral System for Analysis of natural language, designed as a compiler system with virtual bytecode) [In {D}anish]
Njalsgade 96, DK-2300 Copenhagen S. GESA, a GEneral System for the Analysis of natural language, designed as a compiler-interpreter system with virtual intermediate code. Parsing systems for the automatic analysis of natural language can be designed in countless ways, and that holds almost regardless of which method one chooses for the parsing process itself. Designing a parser for a reasonably large fragment of natural language is a major project. As a rule, the design of such large systems takes place in several stages: primarily in the requirements-specification phase, where a set of requirements for the system is drawn up on the basis of, among other things, user needs and input and output data; secondarily in the design phase, where the system's structure, algorithms and data structures are finally settled. In the design phase it will typically be the case that several solutions exist relative to the stated requirements. In this article I will briefly sketch four system models and then outline some of the considerations in the design phase that led to the GESA system being designed as a compiler-interpreter system with virtual intermediate code. I have thus adopted a typical "system designer's point of view" on the design problem, in that I have disregarded all considerations concerning the system's linguistic capacity. A more detailed description of GESA, in which those considerations are also included, is given in SAML no. 10. The table of the four models. The four ways of designing the system, or rather the four system models, that I have chosen to include here can be set out in a table that takes two factors into account: GESA, a GEneral System for the Analysis of natural language, designed as a compiler-interpreter system with virtual intermediate code. Jens Erlandsen. Proceedings of NODALIDA 1983, pages 74-83.
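The compiler-interpreter design with a virtual intermediate code can be illustrated in the abstract: a grammar is first compiled into a flat list of virtual instructions, which a separate interpreter then runs over the input. The instruction set, the toy grammar, and the matching strategy below are invented for this sketch and are not GESA's actual intermediate code.

# Toy illustration of the compiler-interpreter idea: an EBNF-like grammar is
# "compiled" to ('CALL', sym) / ('MATCH', cat) instructions, and a separate
# interpreter executes them over a token list.  Everything here is invented
# for illustration.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["det", "noun"], ["noun"]],
    "VP": [["verb", "NP"], ["verb"]],
}

def compile_grammar(grammar):
    """Compile each rule alternative into a list of virtual instructions."""
    code = {}
    for lhs, alternatives in grammar.items():
        code[lhs] = [[("CALL", s) if s in grammar else ("MATCH", s) for s in alt]
                     for alt in alternatives]
    return code

def interpret(code, symbol, tokens, pos):
    """Try each compiled alternative for `symbol`; return the new position or None."""
    for program in code[symbol]:
        p = pos
        for op, arg in program:
            if op == "MATCH":
                if p < len(tokens) and tokens[p][1] == arg:
                    p += 1
                else:
                    p = None
                    break
            else:                                # CALL: run the sub-program
                p = interpret(code, arg, tokens, p)
                if p is None:
                    break
        if p is not None:
            return p
    return None

compiled = compile_grammar(GRAMMAR)
tokens = [("the", "det"), ("dog", "noun"), ("sees", "verb"), ("a", "det"), ("cat", "noun")]
print(interpret(compiled, "S", tokens, 0) == len(tokens))   # True: the whole input parses

One common motivation for such a split, presumably part of the considerations discussed in the article, is that the language description is processed once by the compiler, while a smaller and simpler interpreter is what runs repeatedly over input.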
{ "name": [ "Erlandsen, Jens" ], "affiliation": [ null ] }
null
null
Proceedings of the 4th Nordic Conference of Computational Linguistics ({NODALIDA} 1983)
1984-05-01
0
0
null
Firstly, whether the overall parsing system contains a preprocessor in some form that processes the language description before it is used by the parser, or whether it contains no such processor. In this model the parser and the grammar are built together into an inseparable whole. There is thus no separate preprocessor (the programming language's compiler, if any, is not counted) that processes a grammar; and if one can speak of a grammar at all, it is at any rate not formulated in a dedicated formalism (the programming language does not count as a dedicated formalism in this connection). If we are dealing with an experimental system in which the language description is frequently changed and extended, such systems easily become chaotic and unmanageable unless some consistent principles are applied to the data and process structure. One such principle could for instance be recursive descent (see Aho & Ullman 1977), which, briefly described, means that each non-terminal in the grammar (e.g. in EBNF form) gets its own procedure.
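In a recursive-descent parser of this kind, each non-terminal becomes one procedure that consumes input and calls the procedures for the symbols on its right-hand side. The toy grammar (S -> NP VP, NP -> det noun, VP -> verb NP) and the token categories below are assumptions made only for illustration.

# Minimal recursive-descent sketch: one procedure per non-terminal.
class ParseError(Exception):
    pass

class Parser:
    def __init__(self, tokens):
        self.tokens = tokens          # list of (word, category) pairs
        self.pos = 0

    def match(self, category):
        if self.pos < len(self.tokens) and self.tokens[self.pos][1] == category:
            self.pos += 1
        else:
            raise ParseError(f"expected {category} at position {self.pos}")

    def S(self):                      # S  -> NP VP
        self.NP()
        self.VP()

    def NP(self):                     # NP -> det noun
        self.match("det")
        self.match("noun")

    def VP(self):                     # VP -> verb NP
        self.match("verb")
        self.NP()

p = Parser([("the", "det"), ("dog", "noun"), ("chases", "verb"), ("a", "det"), ("cat", "noun")])
p.S()
print("accepted" if p.pos == len(p.tokens) else "trailing input")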
null
null
null
null
Main paper: : Firstly, whether the overall parsing system contains a preprocessor in some form that processes the language description before it is used by the parser, or whether it contains no such processor. In this model the parser and the grammar are built together into an inseparable whole. There is thus no separate preprocessor (the programming language's compiler, if any, is not counted) that processes a grammar; and if one can speak of a grammar at all, it is at any rate not formulated in a dedicated formalism (the programming language does not count as a dedicated formalism in this connection). If we are dealing with an experimental system in which the language description is frequently changed and extended, such systems easily become chaotic and unmanageable unless some consistent principles are applied to the data and process structure. One such principle could for instance be recursive descent (see Aho & Ullman 1977), which, briefly described, means that each non-terminal in the grammar (e.g. in EBNF form) gets its own procedure. Appendix:
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
489
0
null
null
null
null
null
null
null
null
f0844c833dde7af8c118db956b1b89bf77f10ac5
30799516
null
Logik anvendt til overs{\ae}ttelse af japansk (Logic used for translation of {J}apanese) [In {D}anish]
At the Department of Computer Science (Datalogisk Institut), University of Copenhagen, experiments are currently being carried out on using logic programming as a tool for automated translation from Japanese into English and Danish. The logic programming language Prolog is used for syntactic analysis of Japanese, for building a predicate-logic representation of the text, for interpreting that representation, and, with the help of dictionaries, for translation into English or Danish. 1. Remarks on the Japanese language. Japanese is not related to any other language (except perhaps Korean), and it has its own way of describing grammar. Indo-European concepts do not really fit Japanese very well. In spite of this, the description of Japanese here will use Indo-European terminology, since the reader is assumed to be most familiar with it. There is thus a certain mismatch between the mode of description and what is being described. Furthermore, for expository reasons some simplifications have been made, e.g. in the treatment of particles. 1.1 Sentence construction. In principle any word order is allowed, as long as the verb comes last. In the vast majority of cases, however, one will see this order: subject, indirect object, object, verb. The subject is usually omitted if it is clear from the context. Case is marked by postposed particles, also called postpositions. Among the most important particles are: "wa" or "ga" (subject particle), "ni" (indirect object particle), "o" (object particle). There is, however, a certain difference between "wa" and "ga" which it would take us too far afield to go into here. Every subordinate clause must come before the main clause it belongs to. Since the verb always comes last in a clause, the verb of a relative clause stands immediately before the noun that it is relative to. There are no relative pronouns in Japanese.
{ "name": [ "Bernth, Arendse" ], "affiliation": [ null ] }
null
null
Proceedings of the 4th Nordic Conference of Computational Linguistics ({NODALIDA} 1983)
1984-05-01
0
0
null
"wa" or "ga" (subject particle), "ni" (indirect object particle), "o" (object particle). There is, however, a certain difference between "wa" and "ga" which it would take us too far afield to go into here.
null
null
null
null
Main paper: among the most important particles are: "wa" or "ga" (subject particle), "ni" (indirect object particle), "o" (object particle). There is, however, a certain difference between "wa" and "ga" which it would take us too far afield to go into here. Appendix:
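The particle-marked, verb-final clause structure described above can be illustrated with a small sketch that groups a romanised clause into case-marked arguments followed by a final verb. The romanisation, the tiny particle table, and the example sentence are assumptions made for this illustration; they do not reproduce the Prolog analysis used in the actual system.

# Toy particle-based argument identification for a verb-final (SOV) clause.
PARTICLES = {"wa": "subject", "ga": "subject", "ni": "indirect_object", "o": "object"}

def analyze_clause(words):
    """Group words into case-marked arguments; the last word is taken as the verb."""
    *body, verb = words
    arguments, current = {}, []
    for w in body:
        if w in PARTICLES:
            arguments[PARTICLES[w]] = " ".join(current)
            current = []
        else:
            current.append(w)
    return {"verb": verb, **arguments}

# "Taroo wa Hanako ni hon o ageta" ~ "Taroo gave Hanako a book"
print(analyze_clause("Taroo wa Hanako ni hon o ageta".split()))
# {'verb': 'ageta', 'subject': 'Taroo', 'indirect_object': 'Hanako', 'object': 'hon'}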
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
489
0
null
null
null
null
null
null
null
null
1f77c1bb4a727f89b4dbc40b0a13137aca7be427
9093102
null
Utnyttjande av ordklasser for f{\"o}rfattarbest{\"a}mning (Utilization of part of speech for authorship attribution) [In {S}wedish]
In the autumn of 1974 a sensational publication appeared in Paris in which the 1965 Nobel laureate Michail Sjolochov was accused of plagiarism. The study had been written by a Soviet critic, D, who has since died, and the preface had been written by Alexander Solzjenitsyn, who fully supported the conclusion: most of Stilla flyter Don (And Quiet Flows the Don) was written not by Michail Sjolochov but by another Cossack author, Fedor Krjukov. This is the background to the formation, in the autumn of 1975, of a Swedish-Norwegian team that took on the question: "Who wrote Stilla flyter Don?" The participants were Sven Gustavsson and Bengt Beckman from Sweden and Geir Kjetsaa and Steinar Gil from Norway. The aim of the project was partly to resolve the authorship problem and partly to test quantitative methods for stylistic analysis and authorship studies. The investigation has now, after some interruptions, been completed, and the results are reported in book form. The talk deals with one of the studies carried out. The study is an extension of a method described on two earlier occasions: in a simulated authorship-attribution exercise on material by the Soviet authors K. Simonov and K. Paustovskij, and in the preliminary investigations of the more spectacular case with Krjukov and Sjolochov as possible authors of Stilla flyter Don. The criterion used in the first case was part-of-speech distribution at the beginning and end of sentences. In Geir Kjetsaa's pilot study "Storms on the Quiet Don", Sjolochov, Krjukov and Stilla flyter Don were examined from, among others, this aspect. The results, which clearly favour Sjolochov, have been presented elsewhere and will not be touched on here. The investigation did, however, confirm the usefulness of the part-of-speech criterion. In the larger, computer-assisted investigation of the same question, whose results are now available, I have therefore used part-of-speech combinations overall and within whole sentences as the criterion.
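The criterion described here, part-of-speech combinations within whole sentences, can be illustrated with a sketch that reduces each text to a distribution of POS bigrams and compares candidate authors with a disputed text by a simple distance. The tagged sentences and the distance measure below are invented for illustration and do not reproduce the study's data or statistics.

# Toy authorship comparison by part-of-speech combinations (POS bigrams
# within sentences).  All data and the distance measure are invented.
from collections import Counter
from math import sqrt

def pos_bigrams(sentences):
    """Relative frequencies of adjacent POS pairs within each sentence."""
    counts = Counter()
    for tags in sentences:
        counts.update(zip(tags, tags[1:]))
    total = sum(counts.values()) or 1
    return {bigram: c / total for bigram, c in counts.items()}

def distance(p, q):
    keys = set(p) | set(q)
    return sqrt(sum((p.get(k, 0) - q.get(k, 0)) ** 2 for k in keys))

# Hypothetical POS-tagged sentences (tags only) for two candidates and a disputed text.
author_a = [["N", "V", "N"], ["A", "N", "V", "N"], ["N", "V", "A", "N"]]
author_b = [["V", "N", "N"], ["V", "A", "N"], ["V", "N", "A", "N"]]
disputed = [["N", "V", "A", "N"], ["A", "N", "V", "N"]]

profiles = {"A": pos_bigrams(author_a), "B": pos_bigrams(author_b)}
target = pos_bigrams(disputed)
print(min(profiles, key=lambda a: distance(profiles[a], target)))   # the closer candidate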
{ "name": [ "Beckman, Bengt" ], "affiliation": [ null ] }
null
null
Proceedings of the 4th Nordic Conference of Computational Linguistics ({NODALIDA} 1983)
1984-05-01
0
0
null
null
null
null
null
null
Main paper: sjuga-melovaja chrebtina gory: EQUATION (tabular material not recoverable from the scan) Appendix:
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
489
0
null
null
null
null
null
null
null
null
a4771097e38224f41fb0b08741e05300f6648758
8277151
null
Conceptual and Linguistic Decisions in Generation
Generation of texts in natural language requires making conceptual and linguistic decisions. This paper shows, first, that these decisions involve the use of a discourse grammar and, second, that they are all dependent on one another, with no a priori reason to give priority to one decision over another. As a consequence, a generation algorithm must not be modularized into components that make these decisions in a fixed order.
{ "name": [ "Danlos, Laurence" ], "affiliation": [ null ] }
null
null
10th International Conference on Computational Linguistics and 22nd Annual Meeting of the Association for Computational Linguistics
1984-07-01
8
60
null
To express in natural language the information given in a semantic representation, at least two kinds of decisions have to be made: "conceptual decisions" and "linguistic decisions". Conceptual decisions are concerned with questions such as: in what order must the information appear in the text? which information must be expressed explicitly and what can be left implicit? Linguistic decisions deal with questions such as: which lexical items to choose? which syntactic constructions to choose? how to cut the text into paragraphs and sentences? The purpose of this paper is to show that conceptual decisions and linguistic decisions cannot be made independently of one another, and therefore that a generation system must be based on procedures that promote intimate interaction between conceptual and linguistic decisions. In particular, our claim is that a generation process cannot be modularized into a "conceptualizer" module making conceptual decisions regardless of any linguistic considerations, passing its output to a "dictionary" module which would figure out the lexical items to use accordingly, which would then in turn forward its results to a "grammar", where the appropriate syntactic constructions are chosen and then developed into sentences by a "syntactic component". In such generation systems (cf. (McDonald 1983) and (McKeown 1982)), it is assumed that the conceptualizer is language-free, i.e., need have no linguistic knowledge. This assumption is questionable, as we are going to show. Furthermore, in such modularized systems, the linguistic decisions must, clearly, be made so as to respect the conceptual ones. This consequence would be acceptable if the best lexical choices, i.e., the most precise, concise, evocative terms that can be chosen, always agreed with the conceptual decisions. However, there exist cases in which the best lexical choices and the conceptual decisions are in conflict. To prove our theoretical points, we will take as an example the generation of situations involving a result causation, i.e., a new STATE which arises because of one (or several) prior ACTs (Schank 1975). An illustration of a result causation is given in the following semantic representation
null
null
Given a result causation, one decision that a language-free conceptualizer might well need to make would be whether to express the STATE first and then the ACT, or to choose the opposite order. If these decisions were passed on to a dictionary, the synthesis of (A) above would be texts like ... Such texts don't follow conceptual decisions dissociating the STATE and its cause: to kill (in the construction No V N1 =: John killed Mary) expresses at the same time the death of N1 and the fact that this death is due to an action (not specified) of No (McCawley 1971). We showed in (Danlos 1984) that a formulation embodying a verb with a causal semantics such as to kill to describe the RESULT, and another verb to describe the ACT, is in most cases preferable to a formulation composed of a phrase for the STATE and another one for the ACT. This result indicates that conceptual decisions should not be made without taking into account the possibilities provided by the language, in the present case the existence of verbs with a causal semantics such as to kill. This attitude is also imperative if a generator is to produce frozen phrases. The meaning of a frozen sentence being not calculable from the meaning of its constituents, frozen phrases cannot be generated from a language-free conceptualizer forwarding its decisions to a dictionary [1]. Let us suppose that a result causation is to be generated by means of two verbs, one with a causal semantics such as to kill for the RESULT, and one for the ACT, and let us look at the ways to form a text embodying these two verbs. The options available are the following: - order of the information. There are two possibilities: either the phrase expressing the RESULT or the phrase expressing the ACT occurs first. - number of sentences. There are two possibilities: either combine the phrases expressing the RESULT and the ACT into a complex sentence, as in (2) (John shot Mary in the head, killing her.), or form a text made up of two sentences, one describing the ACT, one describing the RESULT, as in (1) (Mary was killed by John. He shot her in the head.). - choice of syntactic constructions. We will restrict ourselves to the active construction and to the passive one. For the latter, there is the choice between passive with an agent and passive without an agent. On the whole, for each of the two verbs involved, there are three possibilities. These types of linearization are not predictable. As a consequence, they must be provided to the generator, which must embody in its data the structures of the texts corresponding to the 15 feasible combinations. These structures constitute a real discourse grammar for result causations. The formulation of result causations must be modelled on one of the 15 discourse structures [3]. Generating a result causation thus entails selecting one of these discourse structures. The fact that only 15 discourse structures out of 36 possibilities are feasible shows that it is not possible to make decisions about order of information, segmentation into sentences and syntactic constructions independently of one another. To do so could potentially result in awkward texts more than half the time. So, if the verb to assassinate is to be used, all of the discourse structures in which the RESULT appears after the ACT are inappropriate. ([3] This point is akin to an assumption supported by (McKeown 1982), except that our discourse structures contain linguistic information, contrary to hers, which indicate only the order in which the information must appear.) These forms become acceptable if adverbial phrases are added: John shot the Pope in the head, thereby assassinating him in a spectacular way.
John shot the Pope in the head. Thereby he assassinated him in a spectacular way. On the other hand, if a discourse structure where the RESULT occurs after the ACT is selected, the use of to assassinate is forbidden. At this point, we have shown that decisions about lexical choice, order of the information, segmentation into sentences and syntactic constructions are all dependent on one another. This result is fundamental in generation since it has an immediate consequence: ordering these decisions amounts to giving them an order of priority. There is no general rule stating to which decisions priority must be given. It can vary from one case to another. For example, if a semantic representation describes a suicide, it is obviously appropriate to use to commit suicide. To do so, priority must be given to the lexical choice and not to the order of the information. If the order ACT-RESULT has been selected, it precludes the use of to commit suicide, which cannot occur after the description of the act performed to accomplish the suicide: ... On the other hand, if a result causation is part of a bigger story, and if strictly chronological order has been chosen to generate the whole story, then the result causation should be generated in the order ACT-RESULT. In other words, the order of the information should be given priority. In other situations, there is no clear evidence for giving priority to one decision over another one. As an illustration, let us take the case of a result causation which occurs in the context of a crime. It can be stated that the result DEAD must be expressed by: - to assassinate as a first choice, to kill as a second choice, if the target is famous; - to murder as a first choice, to kill as a second choice, if the target is not famous. Moreover, the most appropriate order is, in general, RESULT-ACT if the target is famous, and ACT-RESULT otherwise. In the case of a famous target, the use of to assassinate is not in contradiction with the decision about the order of the information. But in the case of a non-famous target, the use of to murder doesn't fit the order ACT-RESULT, for this verb cannot occur after a description of the ACT: *John shot Mary in the head, murdering her. *John shot Mary in the head. He murdered her. Therefore, either the decision about the order of the information or the decision to use to murder has to be ... The deletion of the agent leads to a formulation which is correct (Mary was killed by being shot) but which does not express the author of the crime.
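The combinatorics just described (two possible orders, two segmentations into sentences, and three constructions for each of the two verbs, i.e. 36 candidate shapes of which only 15 are feasible) can be pictured as a filter over the space of candidate discourse structures. The sketch below is only an illustration: the feasibility test is an invented stand-in, since the paper's actual list of 15 structures is not reproduced here.

# Sketch of the space of candidate discourse structures for a result
# causation and of filtering it with a discourse grammar.  The `feasible`
# predicate is a placeholder; the real constraints are the 15 structures
# described in the paper.
from itertools import product

ORDERS = ["RESULT-ACT", "ACT-RESULT"]
SEGMENTATIONS = ["one complex sentence", "two sentences"]
CONSTRUCTIONS = ["active", "passive+agent", "passive-agent"]

def feasible(order, segmentation, result_cx, act_cx):
    """Placeholder feasibility test standing in for the discourse grammar."""
    # Invented example constraint: an agentless passive for the ACT is only
    # allowed when the ACT gets its own sentence.
    return not (act_cx == "passive-agent" and segmentation == "one complex sentence")

candidates = list(product(ORDERS, SEGMENTATIONS, CONSTRUCTIONS, CONSTRUCTIONS))
grammar = [c for c in candidates if feasible(*c)]
print(len(candidates), "candidate structures;", len(grammar), "kept by this toy filter")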
null
Main paper: introduction: To express in natural language the information given in a semantic representation, at least two kinds of decisions have to be made: "conceptual decisions" and "linguistic decisions". Conceptual decisions are concerned with questions such as: in what order must the information appear in the text? which information must be expressed explicitly and what can be left implicit? Linguistic decisions deal with questions such as: which lexical items to choose? which syntactic constructions to choose? how to cut the text into paragraphs and sentences? The purpose of this paper is to show that conceptual decisions and linguistic decisions cannot be made independently of one another, and therefore that a generation system must be based on procedures that promote intimate interaction between conceptual and linguistic decisions. In particular, our claim is that a generation process cannot be modularized into a "conceptualizer" module making conceptual decisions regardless of any linguistic considerations, passing its output to a "dictionary" module which would figure out the lexical items to use accordingly, which would then in turn forward its results to a "grammar", where the appropriate syntactic constructions are chosen and then developed into sentences by a "syntactic component". In such generation systems (cf. (McDonald 1983) and (McKeown 1982)), it is assumed that the conceptualizer is language-free, i.e., need have no linguistic knowledge. This assumption is questionable, as we are going to show. Furthermore, in such modularized systems, the linguistic decisions must, clearly, be made so as to respect the conceptual ones. This consequence would be acceptable if the best lexical choices, i.e., the most precise, concise, evocative terms that can be chosen, always agreed with the conceptual decisions. However, there exist cases in which the best lexical choices and the conceptual decisions are in conflict. To prove our theoretical points, we will take as an example the generation of situations involving a result causation, i.e., a new STATE which arises because of one (or several) prior ACTs (Schank 1975). An illustration of a result causation is given in the following semantic representation conceptual decisions and lexical choice: Given a result causation, one decision that a language-free conceptualizer might well need to make would be whether to express the STATE first and then the ACT, or to choose the opposite order. If these decisions were passed on to a dictionary, the synthesis of (A) above would be texts like ... Such texts don't follow conceptual decisions dissociating the STATE and its cause: to kill (in the construction No V N1 =: John killed Mary) expresses at the same time the death of N1 and the fact that this death is due to an action (not specified) of No (McCawley 1971). We showed in (Danlos 1984) that a formulation embodying a verb with a causal semantics such as to kill to describe the RESULT, and another verb to describe the ACT, is in most cases preferable to a formulation composed of a phrase for the STATE and another one for the ACT. This result indicates that conceptual decisions should not be made without taking into account the possibilities provided by the language, in the present case the existence of verbs with a causal semantics such as to kill. This attitude is also imperative if a generator is to produce frozen phrases.
The meaning of a frozen sentence being not calculable from the meaning of its constituents, frozen phrases cannot be generated from a language-free conceptualizer forwarding its decisions to a dictionary [1]. Let us suppose that a result causation is to be generated by means of two verbs, one with a causal semantics such as to kill for the RESULT, and one for the ACT, and let us look at the ways to form a text embodying these two verbs. The options available are the following: - order of the information. There are two possibilities: either the phrase expressing the RESULT or the phrase expressing the ACT occurs first. - number of sentences. There are two possibilities: either combine the phrases expressing the RESULT and the ACT into a complex sentence, as in (2) (John shot Mary in the head, killing her.), or form a text made up of two sentences, one describing the ACT, one describing the RESULT, as in (1) (Mary was killed by John. He shot her in the head.). - choice of syntactic constructions. We will restrict ourselves to the active construction and to the passive one. For the latter, there is the choice between passive with an agent and passive without an agent. On the whole, for each of the two verbs involved, there are three possibilities. These types of linearization are not predictable. As a consequence, they must be provided to the generator, which must embody in its data the structures of the texts corresponding to the 15 feasible combinations. These structures constitute a real discourse grammar for result causations. The formulation of result causations must be modelled on one of the 15 discourse structures [3]. Generating a result causation thus entails selecting one of these discourse structures. The fact that only 15 discourse structures out of 36 possibilities are feasible shows that it is not possible to make decisions about order of information, segmentation into sentences and syntactic constructions independently of one another. To do so could potentially result in awkward texts more than half the time. So, if the verb to assassinate is to be used, all of the discourse structures in which the RESULT appears after the ACT are inappropriate. ([3] This point is akin to an assumption supported by (McKeown 1982), except that our discourse structures contain linguistic information, contrary to hers, which indicate only the order in which the information must appear.) these forms become acceptable if adverbial phrases are added: John shot the Pope in the head, thereby assassinating him in a spectacular way.
If the order ACT-RESULT has been selected, it precludes the use of to commit a suicide which cannot occur after the description of the act performed to accomplish the suicide: On the other hand, if a result causation is part of a bigger story, and if strictly chronological order has been chosen to generate the whole story, then the result causation should be generated in the order ACT-RESULT.In other words, the order of the information should be given priority. In other situations, there is no clear evidence for giving priority to one decision over another one. As an illustration, let us take the case of a result causation which occurs in the context of a crime. It can be stated that the result DEAD must be expressed by:-to assassinate as a first choice, to kill as a second choice, if the target is famous to murder as a first choice, to kill as a second choice, if the target is not famous Moreover, the most appropriate order is, in general, RESULT-ACT if the target is famous, and ACT-RESULT otherwise.In the case of a famous target, the use of to assassinate is not in contradiction with the decision about the order of the information. But in the case of a non-famous • arget, the use of to murder doesn't fit the order ACT-RESULT, for this verb cannot occur after a description of the ACT:• John shot Mary in the head, murdering her.• John shot Mary in the head. He murdered her.Therefore, either the decision about the order of the information or the decision to use to murder has to be. The deletion of the agent leads to a formu]abon which is correct Mary was killed by being shot but which does not express the author of the crime. Appendix:
null
null
null
null
{ "paperhash": [ "brady|natural_language_generation_as_a_computational_problem:_an_introduction", "mckeown|generating_natural_language_text_in_response_to_questions_about_database_structure", "appelt|planning_natural_language_utterances_to_satisfy_multiple_goals" ], "title": [ "Natural Language Generation as a Computational Problem: an Introduction", "Generating natural language text in response to questions about database structure", "Planning natural language utterances to satisfy multiple goals" ], "abstract": [ "This chapter contains sections titled: Introduction, Results for Test Speakers, A Computational Model, The Relationship Between the Speaker and the Linguistics Component, The Internal Structure of the Linguistic Component, An Example, Contributions and Limitations", "There are two major aspects of computer-based text generation: (1) determining the content and textual shape of what is to be said; and (2) transforming that message into natural language. Emphasis in this research has been on a computational solution to the questions of what to say and how to organize it effectively. A generation method was developed and implemented in a system called TEXT that uses principles of discourse structure, discourse coherency, and relevancy criterion. \nThe main features of the generation method developed for the TEXT strategic component include (1) selection of relevant information for the answer, (2) the pairing of rhetorical techniques for communication (such as analogy) with discourse purposes (for example, providing definitions) and (3) a focusing mechanism. Rhetorical techniques, which encode aspects of discourse structure, are used to guide the selection of propositions from a relevant knowledge pool. The focusing mechanism aids in the organization of the message by constraining the selection of information to be talked about next to that which ties in with the previous discourse in an appropriate way. \nThis work on generation has been done within the framework of a natural language interface to a database system. The implemented system generates responses of paragraph length to questions about database structure. Three classes of questions have been considered: questions about information available in the database, requests for definitions, and questions about the differences between database entities. \nThe main theoretical results of this research have been on the effect of discourse structure and focus constraints on the generation process. A computational treatment of rhetorical devices has been developed which is used to guide the generation process. Previous work on focus of attention has been extended for the task of generation to provide constraints on what to say next. The use of these two interacting mechanisms constitutes a departure from earlier generation systems. The approach taken in this research is that the generation process should not simply trace the knowledge representation to produce text. Instead, communicative strategies people are familiar with are used to effectively convey information. This means that the same information may be described in different ways on different occasions.", "This dissertation presents the results of research on a planning formalism for a theory of natural language generation that incorporates generation of utterances that satisfy multiple goals. 
Previous research in the area of computer generation of natural language utterances has concentrated on one of two aspects of language production: (1) the process of producing surface syntactic forms from an underlying representation, and (2) the planning of illocutionary acts to satisfy the speaker's goals. This work concentrates on the interaction between these two aspects of language generation and considers the overall problem to be one of refining the specification of an illocutionary act into a surface syntactic form, emphasizing the problems of achieving multiple goals in a single utterance. \nPlanning utterances requires an ability to do detailed reasoning about what the hearer knows and wants. A formalism, based on a possible worlds semantics of an intensional logic of knowledge and action, was developed for representing the effects of illocutionary acts and the speaker's beliefs about the hearer's knowledge of the world. Techniques are described that enable a planning system to use the representation effectively. \nThe language planning theory and knowledge representation are embodied in a computer system called KAMP (Knowledge And Modalities Planner) which plans both physical and linguistic actions, given a high level description of the speaker's goal. \nThe research has application to the design of gracefully interacting computer systems, multiple-agent planning systems, and planning to acquire knowledge." ], "authors": [ { "name": [ "M. Brady", "R. Berwick" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "K. McKeown" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "D. Appelt" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null ], "s2_corpus_id": [ "63032395", "62743223", "60491098" ], "intents": [ [], [ "result", "background", "methodology" ], [] ], "isInfluential": [ false, true, false ] }
null
487
0.123203
null
null
null
null
null
null
null
null